Remember dial-up internet and flip phones? Artificial intelligence (AI) used to be like that – a complex technology mostly out of sight for everyday people. But in a recent webinar titled ‘Good AI Talks: Foundations and Governance’ led by Mardi Witzel, CEO and co-founder of PolyML, we learned how AI has rapidly transformed from a shadowy figure to the star of the tech world.
Witzel's talk explored the fascinating journey of AI, from its early roots in the 1950s with Alan Turing's pioneering ideas to the recent explosion of generative AI tools like ChatGPT. This "inflection point," as Witzel described it, has put AI power directly in our hands, raising both exciting possibilities and critical questions.
The webinar tackled these questions head-on. First up…
Is AI our saving grace or a potential doombringer?
Experts are divided. Big names like Elon Musk and Stephen Hawking have warned of AI posing an "existential threat" to humanity. Others, like Yann LeCun, are more optimistic, comparing AI to a powerful but controllable assistant.
So, what's the cause for concern? Traditional AI can be biased and opaque, and it can even lead to job losses. New "generative AI" throws a wrench in the works, creating realistic but fake content and raising copyright issues (think deepfakes and stolen news articles).
But it's not all doom and gloom!
Companies are developing responsible AI policies, and governments are working diligently to write regulations. It's a global effort to make sure AI stays our helpful sidekick, not a villainous mastermind.
And so, how can we ensure AI is developed responsibly?
Even without regulations, companies and organizations can govern AI use through things like transparency policies and data security measures.
Here's why governance is crucial: imagine AI churning out fake news articles or promoting conspiracy theories. It's a real risk; researchers have shown that ChatGPT can be prompted to write from the perspective of a conspiracy theorist.
Similarly, Google's AI model, Gemini (previously called Bard), generated historically inaccurate images, apparently the result of overcorrecting in its efforts to remove bias.
These examples highlight the need for guardrails to prevent AI misuse. Developers are grappling with this challenge, and organizations are creating frameworks to promote ethical AI development and use, built around principles like fairness, transparency and accountability.
What is happening around the world?
The regulatory landscape is rapidly evolving to keep pace with AI. There are over 60 national strategies and hundreds of AI governance frameworks being developed worldwide.
The EU recently passed the world's first comprehensive AI regulation, which takes a risk-based approach: the riskiest practices, like social scoring, are banned outright, while high-risk applications such as biometric identification face stricter requirements. It also includes specific rules for generative AI models that pose systemic risks.
The US approach is less prescriptive, with regulations mainly at the state level and a focus on specific areas like facial recognition. There's a proposed Senate bill on algorithmic accountability and an executive order on trustworthy AI for federal agencies.
China enacted some of the earliest AI regulations, covering internet algorithm recommendations, deepfakes and generative AI development. There's even a government repository for AI training data, and models must undergo security assessments.
What should we watch for in the future?
The future of AI is brimming with potential, but also with complexity. Canadian regulations are on the horizon, and generative AI will likely spark legal battles over copyright. Most importantly, the global dialogue on responsible AI development and use will only intensify. As we move forward, it's critical to address the risks and ensure this powerful technology is harnessed for good. By working together – companies, governments and researchers – we can shape a future where AI strengthens society rather than disrupting it.
This webinar was part of the Good AI Talks series, designed to equip founders with the knowledge and tools to build responsible AI products.