Artificial intelligence is evolving so rapidly that companies and governments face a real dilemma: Sit on the sidelines until the risks have been managed or jump in now and adapt as you go?

It’s a widespread concern with countless layers, many of which surfaced yesterday during a national conference in Ottawa that attracted more than 100 participants from industry, academia and government.

The daylong event – called Data Effects Ottawa: Canada’s AI Future – was organized by CityAge and sponsored by Communitech and Google Cloud, with support from Telus, the Ontario Centre of Innovation and the Big Data Hub at Simon Fraser University.

The day kicked off with a talk by Jonathan Rosenbluth of Cohere, a Canadian AI company that specializes in large language models (LLMs), which give companies the tools to build their own products incorporating language AI.

“We’re at a unique moment – like the start of the Internet era – where the future is ours to seize,” Rosenbluth said of the AI era.

Despite the tremendous potential of LLMs, he said one of the challenges is that LLMs are “know-it-alls” that can “respond with all the confidence in the world,” even when they’re wrong. Part of the answer is to provide citations – the source of the information – so that humans can evaluate the accuracy and quality of an LLM response.

“It’s massively exciting but it’s not simple,” he said.

A recurring theme in the conference – first raised by Rosenbluth – was how to balance ethical and security considerations with powerful market forces and a technology that, by its very nature, is developing at an exponential rate.

“The market forces behind the drive for efficiency are too great to say we’re not going to do it because of ethical considerations,” said Shingai Manjengwa, Head of AI Education at ChainML.

What’s needed, she said, is a proactive approach that considers possible fallout from deploying AI products – such as job losses – and includes plans to address such impacts.

Looking to the past for an example, Manjengwa said society didn’t halt the development of electricity just because it put candlemakers out of work. Still, helping those displaced by technology must be a priority, she said.

Fellow panelist Joelle Pineau, VP of AI Research at Meta, agreed – but she said society needs to be “realistic” about the amount of upskilling that is possible. Nonetheless, she said humans remain a key element in a “partnership” with AI technology.

“How we balance the tensions will involve human input for a long time,” Pineau predicted.

In his talk earlier in the day, Rosenbluth acknowledged the dilemma of whether to stay on the sidelines until AI risks are better managed or jump in now. In his view, Canadians need to get over their traditional aversion to risk and just leap into the fast-moving AI game.

“The best way to get started is to build something small,” he said. “But we need to start now. We need to drive adoption as a country.”

Another panel addressed the related issue of AI regulation. It’s a hot topic worldwide. Just last week, U.S. President Joe Biden issued an Executive Order to establish new standards for AI safety, security and data privacy. And earlier this week, the UK hosted a summit at which global leaders (including Canada) agreed on an international declaration to address the risks of AI.

In 2017, Canada became one of the first countries in the world to launch a national AI strategy. The Pan-Canadian AI Strategy is based on three pillars: commercialization, standards, and talent and research. 

While several speakers at yesterday’s conference acknowledged Canada’s quick start, many noted that subsequent legislation – the proposed Artificial Intelligence and Data Act (AIDA), which is part of Bill C-27, the Digital Charter Implementation Act, 2022 – has been moving at a snail’s pace through the parliamentary approvals system.

One of several key concerns with the slow progress is that AI has evolved so rapidly in the past year that the proposed legislation is already out of date.

Summing up the challenge, one panel moderator – journalist David Reevely of The Logic – asked: “How do you write regulation and legislation in such a way that it provides certainty and yet is future-proofed, because stuff is changing so fast?”

Anna Jahn – Director of Public Policy and Learning, AI for Humanity, at the Quebec research centre known as Mila – suggested that part of the answer lies in better efforts to bring scientists and industry together to provide input on the latest developments in technology. That, and better approval processes that allow government to update policies without the need to go through the legislative process.

In a later discussion, Keith Jansa – CEO of the Digital Governance Council – said Canada needs to “codify” how we update regulations in order to speed up the process and keep pace with technological change.

“We need a mechanism to govern standards because standard-setting is faster than legislation,” he said.

Others noted the value of interim measures while formal legislation works its way through the parliamentary system. One example is Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, introduced in September by François-Philippe Champagne, federal Minister of Innovation, Science and Industry.

The concept of “trust” was another key theme throughout the AI discussions.

Communitech’s Chief Technology Officer, Kevin Tuer, introduced the idea of the “economics of trust” – the tangible ROI generated by building a trusted relationship with customers, partners and employees.

Through its Good AI program, Communitech is helping founders understand the value of ethical AI and how to implement a “by design” approach that embeds ethical AI right at the start of product development.

The three tiers of the Communitech Good AI program include:

  1. Education and awareness.
  2. Ethical AI plan development.
  3. Ethical AI plan implementation.

“Our mission is to work with founders to build on this notion of trust and help them understand that the economics of trust suggest they need to be doing this now,” he said. “They need to do it because it makes good economic sense to do so.”

Wrapping up his presentation, Tuer said: “We believe that Good AI is an obligation, not a choice; and it’s an opportunity, not a constraint. This is an opportunity for us to lead, not follow.”

Other speakers also highlighted Canada’s potential to be a world leader in AI. Canadian governments have supported AI research with considerable amounts of funding, they noted. As a result, we’re home to some of the most original thinkers on AI and some of the most innovative AI companies in the world.

“AI is a gift from Canada to the world,” said Mai Mavinkurve, a tech founder and Senior Fellow at the Centre for International Governance Innovation in Waterloo.

To keep pace with how other countries are leveraging that gift, Canada needs to take a “whole of government” approach to creating a national AI strategy that better integrates policy, funding and private-sector innovation and commercialization, said Senator Colin Deacon, an advocate for harnessing Canadian entrepreneurship with opportunities in digital technology.

“We’re good at converting money into research but we’re not good at converting research into money,” he said. “If we don’t start to worry about that, we’re not going to have the money to pay for the research and our grandchildren are not going to be very well off.”

As Cohere’s Jonathan Rosenbluth said at the start of the day, “We have all the pieces in this country to win at AI. The challenge is, How do we take this and leverage it to make our economy more successful, to make our country more successful?”