A short video primer on AI risks
Back in 2018 I produced a short primer on ten potential risks of AI for the YouTube channel Risk Bites. With the advent of ChatGPT and other AI chatbots, I thought it was worth a revisit and update.
It’s been intriguing to see how ChatGPT and other AI chatbots have rekindled the debate around the potential risks of artificial intelligence and the need for responsible AI.
Of course, many of these concerns aren’t new — but the urgency with which they are being explored and debated is.
Back in 2018 I produced a Risk Bites video on 10 potential risks presented by AI. This was designed to be a primer on plausible risks—as someone who studies responsible innovation and navigating the balance between risks and benefits, I wasn’t interested in hyperbolic speculation, but I did want to lay the foundations for nuanced discussion.
Like all Risk Bites videos, it was aimed at a wide audience—and short! And while it grappled with complex issues, it relied on the hallmark Risk Bites simplicity and authenticity, threaded through with humor.
That was five years (and an AI lifetime) ago, though. It had been a while since I’d watched the video, and I was interested to see whether it was still relevant in light of recent advances in AI capabilities.
To my surprise, the original video is not bad—but I did feel that some minor updates were needed.
The result was this refresh:
Much of this matches the original video, but there have been a few tweaks to align it with the emergence of ChatGPT and LLMs.
Despite the simplicity of the video, it touches on ten risks that remain important as developers, regulators, users, and others grapple with developing responsible AI while balancing risks and benefits.
These include:
Technological dependency [1:18]
Job replacement and redistribution [1:38]
Algorithmic bias [1:56]
Non-transparent decision making [2:15]
Value misalignment [2:40]
Lethal Autonomous Weapons [2:58]
Re-writable goals [3:13]
Unintended consequences of goals and decisions [3:26]
Existential risk from superintelligence [3:47]
Heuristic manipulation [4:09]
Simple as it is, I hope the video serves as a useful primer as more and more people join the debate around AI risks and regulation.
(And if you want to know more about the Risk Bites approach, there’s actually a paper on it!)
All these risks could be avoided if we were intelligent enough to put AI on hold for now.
Before we take on more risk, we should first prove that we are capable of meeting the challenges presented by the risks we already face. This is the same procedure we would use for a teenager asking for their first car: prove that you are ready.
1) Get rid of nuclear weapons
2) Meet the challenge presented by climate change in a manner far more credible than currently.
This won't happen, of course — further evidence that we aren't ready for AI. We're going to keep accumulating risk until something blows up in our faces. Then maybe we'll learn?