Whether you’re asking Siri for the forecast, listening to a recommended playlist on Spotify or telling a chatbot about your customer service experience, artificial intelligence (AI) is clearly part of our daily lives. ChatGPT, for example, attracted over 100 million users within its first two months, a pace that even WhatsApp and Facebook took years to match. As AI continues to expand rapidly, there’s growing concern about how we oversee its use and governance, especially in light of risks related to bias and privacy.
“Every organization should have a policy and framework on how AI is used,” said Niraj Bhargava, CEO of NuEnergy.ai.
Recognizing the need for checks and balances, NuEnergy.ai is stepping up to ensure AI is used responsibly.
“When we formed our company six years ago, we made a deliberate choice to focus on governance and trust in AI, rather than developing AI algorithms,” said Bhargava. “This decision positioned us as an important third-party independent assessor that ensures responsible and ethical AI use.”
NuEnergy.ai recently announced it has secured a contract with Employment and Social Development Canada (ESDC), a Government of Canada department, to improve AI governance within ESDC’s Benefits Delivery Modernization Programme. NuEnergy.ai, a graduate of the Innovative Solutions Canada program, will put its Machine Trust Platform (MTP) and AI governance frameworks into action to achieve these goals.
“This is a very exciting announcement,” said Bhargava. “The most important reason why I’m excited is that we’re playing a very important role in responsible AI. So, it’s not just about a Canadian success story and technology. It’s really the right thing for AI and the need globally for responsible AI.”
The collaboration will focus on testing and practical implementation of AI governance, including education initiatives, policy updates and the creation of AI guardrails to protect public trust.
“AI raises a lot of questions, whether they’re existential risks of AI or day-to-day risks of not being fair,” said Bhargava. “Our objective is to address these critical questions and make sure we build and maintain public trust in AI technologies through oversight and governance.”
NuEnergy.ai’s MTP is a cloud-based software platform that evaluates AI technologies against privacy, ethics, transparency and bias metrics. It gives organizations a clear assessment of their AI systems against regulatory guidelines and global standards, including the Government of Canada’s Algorithmic Impact Assessment tool. The software incorporates a machine trust index that measures the reliability of AI applications over time and categorizes them into red, yellow or green zones based on predefined thresholds.
“AI is dynamic. It can drift,” said Bhargava. “Our trust index aggregates a numerical score and classifies it into red, yellow or green. We then work with the organization on what’s acceptable and what’s not, and with measurement, we can determine what is within acceptable guardrails.”
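NuEnergy.ai hasn’t published the internals of its trust index, but the aggregate-and-threshold approach Bhargava describes might look something like the Python sketch below. The metric names, weights and zone thresholds here are illustrative assumptions, not the company’s actual values.

```python
from dataclasses import dataclass

# Hypothetical illustration of a trust-index aggregation. Metric names,
# weights and zone thresholds are assumptions, not NuEnergy.ai's values.

@dataclass
class MetricScore:
    name: str      # e.g. "privacy", "ethics", "transparency", "bias"
    score: float   # normalized to a 0-100 scale
    weight: float  # relative importance for this application

def trust_index(metrics: list[MetricScore]) -> float:
    """Aggregate weighted metric scores into a single 0-100 index."""
    total_weight = sum(m.weight for m in metrics)
    return sum(m.score * m.weight for m in metrics) / total_weight

def classify(index: float, red_below: float = 50.0, green_above: float = 80.0) -> str:
    """Map the index into a red, yellow or green zone via predefined thresholds."""
    if index < red_below:
        return "red"
    if index >= green_above:
        return "green"
    return "yellow"

scores = [
    MetricScore("privacy", 72.0, 1.0),
    MetricScore("ethics", 65.0, 1.0),
    MetricScore("transparency", 80.0, 1.0),
    MetricScore("bias", 58.0, 1.5),  # weighted higher for this application
]
idx = trust_index(scores)
print(f"trust index = {idx:.1f} -> zone: {classify(idx)}")  # 67.6 -> yellow
```

Re-running the same aggregation on fresh evaluation data over time is what makes drift visible: a system that launched in the green zone can slide into yellow or red as its inputs or behaviour change.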
If an AI application scores in the red zone, NuEnergy.ai recommends corrective actions, such as supplementing the training data to address historical biases related to gender or race.
“We can identify what’s causing that bias and address it,” said Bhargava.
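Bias checks like the one Bhargava alludes to often start with simple group-level comparisons. One common example is a demographic parity test, sketched below in Python; the data and the interpretation are made up for illustration, and this is not necessarily the check NuEnergy.ai runs.

```python
# A minimal demographic parity check: compare positive-outcome rates
# across groups. Data and interpretation are illustrative assumptions only.

def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Illustrative model decisions (1 = approved) labelled by applicant group.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(outcomes, groups)
print(f"parity gap = {gap:.2f}")  # flag for human review if the gap is large
```

A gap near zero suggests the groups are treated similarly on this one measure; a large gap is a prompt for investigation, such as checking whether the training data under-represents one group.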
Bhargava says it’s important to monitor AI risks, but that this responsibility shouldn’t rest solely with big tech firms. He points to comments by former UK Prime Minister Rishi Sunak, who, ahead of last year’s AI Safety Summit, said AI firms can’t be left to “mark their own homework” and that government action is needed.
“Independent third-party assessments are key for building trust in AI,” said Bhargava. “Our role as an impartial evaluator helps organizations ensure their AI systems are ethical, transparent and aligned with regulatory standards.”
As for a possible solution to these new problems? Bhargava turns to an old saying:
“If you don’t measure it, you can’t manage it,” said Bhargava.