An IPCC For AI Is A Failure Mode
Brad Allenby takes a critical look at why an IPCC approach to safe AI will not work, and what is needed instead
“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” — Sherlock Holmes, A Scandal in Bohemia
On November 1 and 2, 2023, the United Kingdom hosted an international conference on how to save the world from artificial intelligence, attended by 100 of the best and brightest. The “AI Safety Summit” was held at Bletchley Park, a resonant choice given that Alan Turing, of Turing Test fame, worked there during WWII. It is only one initiative in a rapidly proliferating regulatory ecosystem. While the practicality of deciding how to save the world from existential threats, especially ones that are quite ill-defined and indeed almost entirely hypothetical at this point, may be questionable, the political pressures to do so are quite real.
Among the ideas that have been suggested is the creation of an international organization to study AI modeled on the Intergovernmental Panel on Climate Change, a sort of IPAI. While the intent is good, the suggested IPCC model is not. There are a number of reasons for this, but a few stand out.
The most important reason is that, while climate change and AI may both be existential threats to the status quo, they are very different domains that pose fundamentally different challenges. This is especially true of their respective time cycles. Climate may be changing, but it is changing reasonably slowly, which provides time for study, contemplation, and adaptation, and time for the slow process of international meetings, drafting documents and circulating them for comment, reaching agreement, and other aspects of formal international processes. In short, the cycle time of climate change meshes with the cycle time of existing institutional models such as the IPCC.
This is clearly not the case with AI. Any organization that draws on existing processes to track and understand AI will, from the start, be decoupled from the cycle time of the phenomenon it purports to be studying. It isn’t that the AI domain lacks stable elements; like any infrastructure system, large parts of it remain stable. Neural net technology arguably dates from 1957, when Cornell University psychologist Frank Rosenblatt demonstrated the first trainable neural network, called the “Perceptron”. But many of the critical implications of AI, especially the social, economic, cultural, military and security, criminal, and psychological impacts of new and powerful generative AI models, are rolling out in a chaotic and unprecedented global wave that changes by the week, if not more rapidly. No existing institutional model, especially not one designed to be an integral part of international deliberations, can ever cope with that rate of change.
The rate of change contributes to another critical difference between the climate change and AI domains: both may be complex adaptive systems, but the climate system is far more accessible to study and understanding than the relevant dimensions of the AI domain. Put bluntly, climate change is knowable; AI is not — at least for the foreseeable future. Thus, one can structure an IPCC to try to understand climate change, and not be incoherent; one cannot structure anything like an IPCC to pretend to know AI, especially if the intent is not to understand AI as a technology, but to understand the broader implications of AI. The latter, at its current and accelerating rate of evolution, is simply unknowable. An IPAI would be a symbolic yawp in the face of unknowability.
A second set of reasons revolves around the geopolitical implications of AI as opposed to climate change. Not for nothing did President Putin observe to RT in 2017 that “Artificial intelligence is the future, not only for Russia, but for all humankind . . . . Whoever becomes the leader in this sphere will become the ruler of the world.” To be sure, climate change has geopolitical implications, but at heart it is about natural earth systems, such as global atmospheric chemistry and physics. AI is, by contrast, a human product emerging from the cognitive ecosystem, and it is intimately bound up with national security and, for high technology firms, competitive advantage and, indeed, existence. An IPCC can try to focus on science, but an IPAI never could.
A less fundamental but still relevant difference is that the IPCC is tied to the Westphalian model of world order, dominated by states, whereas much of the competence, cutting-edge R&D, supportive data lakes and physical infrastructure, and capability in AI lies with private firms (one reason the Chinese and U.S. governments are constantly fighting with their national AI champions). AI is post-Westphalian, and the state-based response mechanisms so characteristic of the U.N., the E.U., and powerful states such as the U.S. and China are structurally inadequate from the get-go.
This does not mean that a meaningful institutional response cannot be mounted. But such a response will be dysfunctional at best unless it accepts the unique challenges posed by AI in contrast to more traditional threats. In particular, what is required is an adaptive, agile, and constantly learning network, not dominated by states and governmental entities, and certainly not an international, institutionalized organization. Rather, the idea should be to create informal, broad-based, networked working groups, constantly communicating among themselves about new developments on an ongoing, close-to-real-time basis. The desired output should not be agreement on anything; rather, the network should focus on pure perception.
This focus should be on what AI in the wild is actually doing, not what various interested parties can hypothesize. Other organizations might take the output and regulate, or suggest laws or policies, or make investments, or create new bodies of soft law such as industry codes. But the network itself should have as its only goal the perception of the current state of the art in real time.
Moreover, given the geopolitical implications of AI, and the very different power structures, public and private, with existential interests at stake, the effort should not be to build any sort of international network. International coordination and cooperation may be feasible, especially regarding certain applications of AI, but it should not be integrated with the need to perceive as accurately as possible a rapidly changing and highly unpredictable domain.
One might be forgiven for worrying that current institutional responses to the rapid evolution of AI systems and uses are more panic than thoughtful public policy in action. After all, a reasoned response should at least try to be based on what’s actually happening, rather than on nightmares and wishful thinking. AI in the wild, loose in society, remains largely unknown, but a panic response represents a lack of imagination and creativity, a choice rather than a law of nature. We can and should do better.
Brad Allenby is President’s Professor of Engineering, and Lincoln Professor of Engineering and Ethics at Arizona State University. Previously, he was senior corporate attorney, then Environment, Health and Safety Vice President, at AT&T. Dr. Allenby received his BA from Yale University, his JD and MA (economics) from the University of Virginia, and his MS and Ph.D. in Environmental Sciences from Rutgers University. His recent work has focused on emerging technologies, especially in the military and security domains. Recent books include The Techno-Human Condition (with Daniel Sarewitz, 2011), The Theory and Practice of Sustainable Engineering (2011), Future Conflict & Emerging Technologies (2016), The Applied Ethics of Emerging Military and Security Technologies (2016), and Infrastructure in the Anthropocene (with Mikhail Chester, 2021).
It makes sense to respond to some immediate challenges without waiting for greater certainty about AI generally; one would not, for example, want to let identity theft run rampant, or disregard obvious environmental issues arising from, e.g., Bitcoin technology, while one sought a deeper understanding of generative AI. I think the caution comes in because many would-be regulators fail to differentiate between specific, addressable challenges and the broader policy implications of AI.
Interesting comparison in this analysis. Though I'd say many of the risks of AI technologies are known: exponential disinformation, fraud and identity theft, child porn, fake intimacy, automated bioweapons, destabilisation of nation states, and the disruption of social cohesion and trust, etc. Then there is the environmental footprint and supply chain externalities. We can acknowledge that there are policies, laws and regs in place to deal with everything from consumer and data protection to anti-trust and the environment. Monitoring and enforceability are always key considerations (systemic problems of regulatory capture aside). These technologies do introduce new categories of risk, and at a pace that is deeply concerning. But I agree that the patterns of old governance and top-down bureaucracy are not up to the challenge. I'd argue this goes for many problems interwoven with our metacrisis. So we need models of sense-making that are more networked, alongside governance that is more decentralised and distributed, harnessing collective intelligence.