Unpacking AI in the 2024 World Economic Forum Global Risks Report
Artificial intelligence features prominently in WEF's 19th Global Risks Report, indicating growing concerns around the technology's potential to disrupt social and economic systems.
Read the headlines swirling around this year’s World Economic Forum Annual Meeting in Davos, and you could be forgiven for thinking we’re heading for a global AI catastrophe. And yet the focus on artificial intelligence in the risk report is more nuanced and less sensationalist than you might think from media coverage.
AI certainly features prominently at Davos this year — driven in large part by WEF’s recently released Global Risks Report 2024. (Here’s Marietje Schaake of Stanford’s Institute for Human-Centered Artificial Intelligence commenting on this prominence directly from Davos). This is the first year the report has explicitly included AI as a distinct risk category. And while the headlines it’s driving may be sensationalist, the report itself is far more measured.
Earlier this week I provided a snapshot of how technology has featured in the Global Risks Report over the past eighteen years, and how this has changed from year to year. Here — as promised — I want to dive a little deeper into AI’s presence this year. But first it’s worth looking back and seeing how AI has increased in prominence in the report since its inception in 2006.
A Brief History of AI in the Global Risks Report
The first year that AI was explicitly mentioned in the Global Risks Report was 2014. It wasn’t a ranked risk that year, but it was flagged in a call-out box on existential risks — with the note that “[a]mong other existential risks is the possibility that breakthroughs in artificial intelligence could move rapidly in unexpected directions …”
The following year, the risk category “massive and widespread misuse of technologies” was introduced to the report — a category that included AI, along with 3D printing, geo-engineering, and synthetic biology. However, misuse of technologies did not appear in that year’s top ten risks.
By 2016, “massive and widespread misuse of technologies” had evolved into “adverse consequences of technological advances” — a category that included AI. But once again this AI-relevant category didn’t make the top ten risks for the year.
The adverse consequences of technological advances risk category remained in the Global Risks Report through 2022. Between 2017 and 2020 it stayed out of the top ten risks. Then in 2021, things began to change: that year it was ranked 4th among “existential risks” emerging over the next 5-10 years, although it still didn’t rank among short- to medium-term risks.
“Adverse consequences of technological advances” slipped to eighth position among long-term risks in 2022, but again didn’t make the short- to medium-term lists. Then in 2023, it was replaced by “adverse consequences of frontier technologies” (defined as “Intended or unintended negative consequences of technological advances on individuals, businesses, ecosystems and/or economies. Includes, but is not limited to: AI, brain-computer interfaces, biotechnology, geo-engineering, quantum computing and the metaverse”).
Once again, this AI-encompassing category did not make the top ten risks (ranking 32nd out of 32 among short-term risks), but it was ranked 18th out of 32 among long-term risks.
Which brings us to 2024.
Short-Term Risks from AI in 2024
In this year’s report, “adverse outcomes of AI technologies” was split out of “adverse outcomes of frontier technologies” (while retaining the latter). Within this new category, AI risks were ranked 29th out of 34 in the short term, and 6th out of 34 in the long term (based on a 10-year horizon).
This seems to mark an inflection point in awareness of AI-related risks, even though AI does not dominate this year’s risk rankings outright. There are, however, prominent technology-related risks in the report that are very much related to AI — including misinformation and disinformation (ranked number one in the short term), cyber insecurity (4th), technological power concentration (12th), and censorship and surveillance (21st).
This represents a clear jump in concerns around AI-associated risks from 2023 — aided perhaps by risks associated with “misinformation and disinformation” (ranked 16th in 2023) being moved from the societal risk category to the technological risk category in 2024.
Despite this jump, AI is by no means the most prominent risk identified in 2024. Although misinformation and disinformation top the short-term ranking, extreme weather events (2nd), societal polarization (3rd), interstate armed conflict (5th), pollution (10th), and other risks attest to a global landscape where climate change and economic and geopolitical tensions continue to dominate.
The Longer-Term Perspective on AI Risks
In the ten-year risk outlook, environmental concerns sweep the top four slots in 2024: extreme weather events, critical changes to Earth systems, biodiversity loss and ecosystem collapse, and natural resource shortages.
If anything, this report is a call to action on climate, not AI.
And yet, AI-related risks come a close second to climate in the long term, with misinformation and disinformation ranked 5th, and adverse outcomes of AI technologies ranked 6th. And when survey participants were asked to select “up to five risks that you believe are most likely to present a material crisis on a global scale in 2024” they put “AI-generated misinformation and disinformation” in second place.
Misinformation and Disinformation
Despite a lack of explicit emphasis on AI risks in the short term, artificial intelligence does feature prominently in discussions around misinformation and disinformation in the report.
Here, the discussion is broad, and addresses an increasingly complex landscape around mis- and dis-information. Within this, the report acknowledges that “easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites”.
In this context, AI — and generative AI in particular — is clearly seen as a disruptive enabler of misinformation and disinformation, and one that will be especially relevant in 2024 as nearly 3 billion people around the world vote in national elections.
Here, according to the report, “The implications of … manipulative campaigns could be profound, threatening democratic processes”.
Beyond democratic processes though, there is a concern that misinformation and disinformation will continue to disrupt society, both intentionally and unintentionally. Soberingly, the report notes “the proliferation of misinformation and disinformation may be leveraged to strengthen digital authoritarianism and the use of technology to control citizens. Governments themselves will be increasingly in a position to determine what is true, potentially allowing political parties to monopolize the public discourse and suppress dissenting voices, including journalists and opponents”.
How AI will play out in scenarios like this remains to be seen. But the meteoric rise of extremely powerful and highly accessible platforms that can be used to propagate mis- and dis-information suggests that the role of AI will become increasingly significant — aided and abetted, of course, by powerful individuals and organizations that not only benefit from the resulting manipulation and disruption, but actively use disinformation to persuade people that they are legitimate sources of truth.
More than Misinformation and Disinformation
Looking at the longer term potential risks associated with AI, the report notes that “Unchecked proliferation of increasingly powerful, general-purpose AI technologies will radically reshape economies and societies over the coming decade – for better and for worse. Alongside productivity benefits and breakthroughs in fields as diverse as healthcare, education and climate change, advanced AI carries major societal risks.”
The report also recognizes the potentially disruptive synergies between AI and other technologies, stating that “[AI] will also interact with parallel advancements in other technologies, from quantum computing to synthetic biology, amplifying adverse consequences posed by these frontier developments”.
Concerns associated with reaching artificial general intelligence (AGI) are only briefly touched on in the report — in contrast to some headlines from this past year. Rather, of greater prominence in the report are concerns around more mundane (but still highly disruptive) unintended consequences of AI. As the report’s authors state: “Intentional misuse is not required for the implications to be profound. Novel risks will arise from self-improving generative AI models that are handed increasing control over the physical world, triggering large-scale changes to socioeconomic structures.”
These “mundane” risks include vulnerabilities in the constrained and “singular” supply chains supporting AI development; concentration of AI power in the hands of a few models and companies; and anti-competitive practices. They also cover the need for strong ethical guardrails — especially in areas like healthcare — and the danger of less wealthy economies being persuaded to cut corners on ethical use as they trade data for access to cutting-edge technologies.
This last concern is further highlighted in the risks of a growing AI divide between high- and low-income countries and, as a result, differential access to the potential benefits of AI advances across areas like economic productivity, finance, climate, education, and healthcare.
The report also emphasizes the potential dangers of AI boosting warfare capabilities — especially in cyber warfare and autonomous weapons systems. These are risks that are as much political as they are technological in nature, with the report pointing out that “[a]bstentions and votes against a draft UN resolution relating to autonomous weapons systems last year were notable”. As a result, the report concludes that “[t]here remains a material chance … that these systems could be empowered to autonomously take decisions on lethal actions, including goal creation and the selection of targets” and that “[t]he potential for miscalculation in these scenarios is high”.
Next Steps
Despite a reasonably grounded assessment of near-term and emerging risks associated with AI, the Global Risks Report is rather lackluster in its recommendations on action and next steps. This is perhaps to be expected for what is, in essence, a crowdsourced report where innovative and creative ideas are likely to be subsumed by regression to the mean of conventional thinking. And so it’s no surprise that the three areas of possible action highlighted are the well-worn ones of public awareness and education, national and local regulations, and global treaties and agreements.
Considerable thought and debate is already occurring globally around the last two areas — including the recent report Governing AI for Humanity from the United Nations High-Level Advisory Body on AI. And my strong sense is that the Global Risks Report is simply echoing ideas and initiatives that have emerged over the past few months.
There’s also been a lot of talk around public awareness and education — including raising AI literacy — but my sense is that there’s a lot less action taking place here.
The report states that “AI literacy could be integrated into public education systems and trainings for journalists and decision-makers to not only understand capabilities of AI systems but also to identify trustworthy sources of information”. This is a great start — but it barely scratches the surface of the role, purpose, and impact of AI understanding and decision-making across society.
Recommendations for action aside, this year’s Global Risks Report does raise awareness of the mounting challenges and opportunities around AI in ways that could lead to more positive and coordinated global action. Certainly, having AI as high up on the agenda at Davos as it is this year is a good step forward. We just need to ensure that the momentum continues beyond the meeting, and that the various movers, shakers, and influencers gathering in Switzerland this week keep the conversation going well into 2024 and beyond.
Addendum
This really has very little to do with AI, but having accused the Global Risks Report of “regression to the mean of conventional thinking,” I thought that the two “next global shock” call-outs that appear in the report were actually rather insightful.
The first is on a breakthrough in quantum computing. I actually think that any future global shock in this area will be driven by quantum technologies more broadly, not just quantum computing. But the report’s call-out box on quantum computing is worth reading in full:
“Quantum computing could break and remake monopolies over compute power, posing radical risks in its development. Criminal actors have already launched harvest attacks (also known as “Store Now, Decrypt Later”, or SNDL) in anticipation of a cryptographically significant computer. Trade secrets across multiple industries, including pharmaceuticals and technological hardware, could be compromised, alongside critically sensitive data such as electronic health records, and sold to the highest bidder. Large or even global infrastructure – such as banks, power grids and hospitals – could also be paralyzed. Yet it is not just widespread proliferation of this level of compute power that is concerning. Although arguably a tail risk, if cryptographically-significant quantum computing capability is covertly achieved and subsequently uncovered, it may rapidly destabilize global security dynamics.”
The second global shock that caught my attention is associated with the outsized influence of unelected billionaires (something that’s been more than a little obvious over the past year or so to anyone using the platform formerly known as Twitter).
Again, from the report’s call-out box:
“Technological power in the hands of the unelected is seen by numerous [report] respondents to be a bigger concern than power concentrated in government. The influence of Big Tech companies is already transnational, competing with the likes of nation states, and generative AI will continue to catalyse the power of these companies and associated founders. Although the influence of these companies is predominantly exercised in the regulatory sphere for now, control over dual-use, general purpose technologies will continue to confer significant and direct power to private actors. The risk of unilateral action by individuals could rise in a variety of new domains with significant consequences – such as the use of civilian satellites in the war in Ukraine.”
As they say, watch this space …