A Canadian company with strong ties to Waterloo Region has launched a software platform to support the ethical and transparent governance of artificial intelligence (AI) systems.

Ottawa-based NuEnergy.ai says its Machine Trust Platform software will be at the centre of a pilot project with the Royal Canadian Mounted Police (RCMP).

“The new Machine Trust Platform is a Canadian tech breakthrough that gives organizations configurable one-stop access to qualified, globally sourced AI governance measurements and assessments,” said Niraj Bhargava, co-founder and CEO of NuEnergy.ai and a graduate of the University of Waterloo’s systems design engineering program.

Launched three years ago, NuEnergy.ai has hubs in Ottawa and Waterloo, as well as employees in Montreal, Toronto and Vancouver.

The startup provides AI management software and AI governance consulting services that help clients set up “guardrails” to mitigate risk and protect trust in the technology and the organization.

Those guardrails consist of governance plans and the software needed to measure essential AI “trust parameters” such as privacy, ethics, transparency and bias.

Artificial intelligence technology is found in a wide array of commercial and government systems, from voice assistants such as Siri to online chatbots, video-streaming recommendations, self-driving cars and facial recognition.

Given AI’s dependence on data sets, its rapidly growing use has ignited a global debate about privacy, security, ethics, biases, cultural sensitivities and human rights. The goal, according to governance advocates, is to ensure that AI technologies are understandable, transparent and ethical. 

The World Health Organization, for example, issued a report this past June that recommended basic principles for governing the use of AI in the delivery of health care. The list includes transparency, accountability, inclusiveness, equity and the promotion of human well-being, safety and the public interest.

Elsewhere, the European Union is debating a proposed Artificial Intelligence Act. Closer to home, Canada’s federal government has a set of guiding principles for the use of AI, as well as an Algorithmic Impact Assessment tool that helps departments assess the risks of automated decision systems. In Ontario, the provincial government is developing a Trustworthy Artificial Intelligence Framework.

Communitech CEO Chris Albinson has been urging business and government leaders to see “ethical AI” as a distinct Canadian asset that can distinguish our AI products and services in the international market.

NuEnergy.ai’s Bhargava agrees.

“The approach to AI from China, the United States and Europe (are) three very different models,” he said in an interview with Tech News. “Canada has tremendous capability in the R&D of machine learning and AI, but our differentiation should and could be the ethics of AI and the trustworthiness (associated with Canada’s international reputation).”

On the consulting side, NuEnergy.ai helps clients understand the latest AI standards, principles and governance practices, then works with them to develop a customized governance framework suited to the organization’s needs. The Machine Trust Platform integrates with that governance plan to provide an “open and transparent” way to track, assess and measure performance against the framework.

“The idea is really to have technology that can help monitor and measure the trustworthiness of artificial intelligence and machine learning,” said Bhargava.
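NuEnergy.ai has not published the Machine Trust Platform’s internals, but the general idea of scoring trust parameters against thresholds set by a governance framework can be sketched in a few lines. The parameter names, scores and thresholds below are hypothetical, not the company’s actual measures.

```python
# Illustrative sketch only: the trust parameters, scores and thresholds here are
# hypothetical, not NuEnergy.ai's published measures.
from dataclasses import dataclass


@dataclass
class TrustParameter:
    name: str
    score: float      # latest measured value, 0.0 (worst) to 1.0 (best)
    threshold: float  # minimum acceptable value set by the governance framework


def evaluate(parameters: list[TrustParameter]) -> dict[str, bool]:
    """Return a pass/fail flag for each trust parameter."""
    return {p.name: p.score >= p.threshold for p in parameters}


if __name__ == "__main__":
    framework = [
        TrustParameter("privacy", score=0.92, threshold=0.90),
        TrustParameter("transparency", score=0.74, threshold=0.80),
        TrustParameter("bias", score=0.88, threshold=0.85),
    ]
    for name, ok in evaluate(framework).items():
        print(f"{name}: {'within guardrails' if ok else 'needs review'}")
```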

One of the challenges with AI technologies is the problem of “drift,” which occurs when an AI system begins to perform differently, or less effectively, than originally intended. Possible causes range from outdated data sets to inputs and parameters that lose their relevance over time.

“There’s a life cycle (to AI technology), from data to developing the models, to tuning them, to deploying, and we also talk about ‘drift’ of those models,” said Bhargava. “Depending on where you are in the life cycle, you may have different questions and guardrails.”
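Drift can be watched for in many ways. One common, vendor-neutral approach is to compare the distribution of a model’s input data in production against the data it was trained on; the short sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy. The feature values and significance threshold are illustrative assumptions, not a description of NuEnergy.ai’s method.

```python
# A minimal drift check: flag a feature whose production distribution has shifted
# away from its training distribution. Data and threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp


def feature_drifted(train_values: np.ndarray, live_values: np.ndarray,
                    alpha: float = 0.05) -> bool:
    """Flag drift when the live distribution differs significantly from training."""
    _statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
    live = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted production values
    print("drift detected:", feature_drifted(train, live))
```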

The RCMP pilot is a big deal for NuEnergy.ai. Not only is it the company’s first official deployment of the Machine Trust Platform, but the results could also determine whether NuEnergy.ai is permitted to pilot its technology and services with other federal departments and agencies.

“We’re delighted to be in partnership with the federal government,” said Bhargava. “It isn’t easy for a tech startup to find the right path into a larger organization.”

That path was opened in part through Innovative Solutions Canada, a federal program that supports the growth of innovative Canadian companies by having the federal government act as a first customer.

In addition to government and public-sector clients, NuEnergy.ai is pursuing private-sector customers.

“These questions of transparency and governance and ethics are critical (to all organizations),” Bhargava said.

His message to organizations that use AI is short and to the point.

“You’ve got to get the guardrails up early and put practical methods in place. Don’t wait for the crisis to occur.”