Governments, educators and private companies all must act quickly to rein in the biases and excesses of autonomous systems driven by powerful artificial intelligence, a lunchtime symposium at CIGI heard Wednesday. The price of not acting is an existential threat to the fabric of human society.

“What this conversation comes down to for me is humans doing things to other humans,” said Donna Litt, COO and co-founder of the Waterloo-based AI startup Kiite and one of three panelists who took part in the discussion at CIGI, the Centre for International Governance Innovation, titled Responsible Artificial Intelligence.

“[It’s about] humans making decisions for other humans,” continued Litt, “and, in a number of circumstances, taking decisions away from other humans, and doing that at scale, without consent and without knowledge of the long-term implications.”

Moderated by Courtney Doagoo, a CIGI post-doctoral fellow in international law, the panel included California Polytechnic State University assistant professor of philosophy Ryan Jenkins and University of Waterloo computer science research professor and lawyer Maura Grossman. The panelists framed the problems posed by AI – algorithms infused with racial and gender bias and autonomous weapon systems, robots and vehicles that make decisions that are potentially harmful to their human masters – and then described the cost of not addressing those problems and, finally, laid out potential solutions.

“The fact that AI can go through reams of data that humans couldn’t go through invites us to collect more and more data, more and more voraciously while nurturing the impression that more data is always better,” said Jenkins.

“This is a recipe for injustice and a recipe for serious concerns. Once these technologies are released into the world, there’s no putting the genie back in the bottle.”

Citing the development of nuclear weapons as an example, Jenkins said, “It’s extraordinarily difficult to un-invent technology. If there’s any hope of crafting these technologies so they become tools for good rather than tools for evil we have to do that before they’re deployed.”

Grossman, a self-described social scientist working “in a hard computer science department,” described the difficulty in getting any one group to take responsibility for addressing the ethical implications of AI.

Computer science students, she said, are only focused on optimizing their algorithm. “They’re not concerned with where the data came from, whether it’s clean or biased.”

Likewise, Grossman said, the lawyers and law students who would be in a position to craft policy or pose questions about the moral or legal implications of AI “don’t understand the technology” and “they don’t know the questions to ask.

“So both [groups] think it’s not their problem. We have to help them see it’s their problem. Because if it’s not their problem, it’s nobody’s problem, and that scares me.”

The discussion, before an audience of more than 100, was one in a series of occasional talks falling under the umbrella known as the Data Hub Sessions, which are focused on the use of data and hosted by Communitech.


Kiite co-founder and COO Donna Litt. (Communitech photo: Sara Jalali)


As for solutions, and ensuring the ethical deployment of AI, Litt said that part of the responsibility lies with firms working in the AI sphere themselves. Companies, she said, have choices about whom they align with and what they decide to buy. She described an instance in which her company, Kiite, declined to purchase a particular vendor’s data because of the way it had been obtained, opting instead for data from another firm.

“The spirit in which they’re obtaining that data means something,” she said. “In discovering that, it was an easy choice. Vendor 2 did not make the cut. Vendor 1 did.”

Grossman, asked by Doagoo about the role of government, said that its involvement must be carefully weighed. “Nobody wants to over-regulate and be behind the eight-ball with innovation. Not all AI presents the same risk profile. It shouldn’t all be put in the same basket.”

But, she said, federal oversight agencies might nevertheless have a role to play.

“Maybe we need something like an FDA for high-risk algorithms,” she said, referring to the U.S. Food and Drug Administration.

Jenkins pointed out that a recent focus on the moral implications of technology, driven at least in part by the past year’s wave of news stories about the misbehaviour of tech companies, is having an impact.

“There’s increasing public pressure that’s being put on Silicon Valley,” Jenkins said. “And at the same time a kind of moral insurrection has gripped Silicon Valley which is coming from employees themselves. People are walking out. People are signing petitions. People are quitting their jobs in protest.

“So you are starting to see the tables turn.”

And he said an ethical governance model already exists that could be applied to artificial intelligence – that of bioethics.

“In the ’60s and ’70s a lot of doctors became very concerned with the kinds of things going on in medicine,” Jenkins said.

“Some doctors and philosophers said you’re doing things that seem morally wrong. You’re doing things that seem ethically mistaken.”

A partnership resulted, he said. “As a result, the field of bioethics blossomed and it’s a fantastic success story.”

In short, the panel agreed that solutions to the problems posed by AI’s widespread deployment are available. Education, public pressure and interdisciplinary studies – in which computer science students are also steeped in ethical training – can all make a difference.

“This is a problem that we can solve,” said Grossman. “We have the resources.”


The scene Wednesday at CIGI, site of a roundtable discussion entitled Responsible Artificial Intelligence. (Communitech photo: Sara Jalali)