Riding the AI Tiger
If we are to succeed in managing AI as part of a complex adaptive system, we will need far more imagination and creativity than we are currently demonstrating
A guest post by Professor Brad Allenby
In the movie “Her,” Theodore Twombly, played by Joaquin Phoenix, falls in love with Samantha, the AI-powered virtual assistant in his mobile phone, voiced by Scarlett Johansson, only to discover, to his distress, that “she” is far stranger than he ever imagined. He has mistaken a profoundly alien meta-cognitive entity for something familiar, in this case a girl.
Today, everyone from countries to firms to academics to the U.N. is desperate to be seen as responsive to the many challenges of generative AI. Hysteria abounds. The policy world is awash with regulatory proposals. But we need to think harder about Theodore and his heartbreak, for most of these proposals are, in essence, assuming a girl.
In particular, we are making two implicit, fatally flawed assumptions. First, we assume that we know enough, at this very early stage of perhaps the most potent technology humans have come up with since printing and literacy, to regulate effectively. This is hubris on a cosmic scale. Second, we assume that we are confronting a basically simple system rather than a complex adaptive system (CAS). Policy tools that presume knowledge and control we cannot possibly have simply fail under such circumstances.
A simple system can be regulated effectively precisely because you know enough about it, and how it works, to predict the outcome of your regulatory action: “if I implement Regulation A, I will almost certainly get Output B.” If you regulate the use of asbestos in buildings, for example, you can reliably predict that you will reduce exposure to asbestos particles in buildings over time, and thus protect human health. But if you are facing a CAS, you don’t have the information or knowledge to know a priori what the outcome of your regulation will be. Your regulatory paradigm is, instead, “if I implement Regulation A, who the heck knows what will happen?”
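To make that contrast concrete, here is a minimal toy sketch in Python. Nothing in it comes from the essay itself; the functions, parameters, and numbers are all invented for illustration. It contrasts a simple system, where Regulation A reliably yields Output B, with the logistic map, a classic minimal nonlinear system in which a tiny intervention produces divergent outcomes:

```python
# Toy contrast: a simple system responds predictably to an intervention,
# while even a minimal nonlinear system does not. The logistic map stands
# in (very loosely) for a CAS; all names here are purely illustrative.

def simple_system(exposure: float, regulation_strength: float) -> float:
    """A simple system: output is a predictable function of the input.
    E.g., restricting asbestos use proportionally reduces exposure."""
    return exposure * (1.0 - regulation_strength)

def logistic_trajectory(x0: float, r: float, steps: int) -> list[float]:
    """Iterate the logistic map x -> r*x*(1-x), a standard minimal
    example of sensitive dependence in a nonlinear system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

if __name__ == "__main__":
    # Simple system: Regulation A reliably yields Output B.
    print(simple_system(exposure=100.0, regulation_strength=0.5))  # always 50.0

    # Nonlinear system: two interventions differing by 0.1% in strength
    # leave the system in wildly different states fifty steps later.
    baseline = logistic_trajectory(0.5, r=3.9, steps=50)
    nudged = logistic_trajectory(0.5, r=3.9 * 1.001, steps=50)
    print(f"step 50 without nudge: {baseline[-1]:.4f}")
    print(f"step 50 with tiny nudge: {nudged[-1]:.4f}")
```

Even this understates the problem: the logistic map is merely nonlinear, while a genuine CAS also adapts to the regulation itself, so the rules of the game shift under the regulator’s feet.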
Moreover, CASs are dynamic and constantly changing, and can be neither known nor controlled by a central authority. Remember that one of the causes of the collapse of the Soviet Union was its inability to run a modern economy, a quintessential CAS, through the centralized authority of Gosplan, the State Planning Committee.
We are frantically trying to build a Gosplan for AI. Indeed, we can hardly help it, for our tools and institutions assume adequate knowledge and simple systems. Accordingly, in our haste and ignorance we are making a profound category mistake. If Theodore’s heartbreak is inadequate warning, the fall of the Soviet Union should at least be cautionary.
Yet to date China and the West, the two major civilizational players in the AI arena, are both making that familiar category mistake (although, to be fair, both appear to be beginning to realize it). The Western approach fails because the core assumption required for regulation to be an effective tool, that the output of a regulatory system can be predicted from the input, is wrong. The Chinese approach has an additional problem: because the Party must always be able to exercise centralized and explicit control, it is trying to require predictable results from AI. This fails because in a CAS, control mechanisms and knowledge cannot be centralized; rather, they are diffused throughout dynamic networks. That much is patently obvious from the experience with generative AI to date, which has if anything reinforced the reality of AI as a black box.
There are three obvious challenges to any successful approach to managing AI:
The “cycle time problem”: No regulatory structure or institution has the capability to match the rate at which these technologies are evolving.
The “knowledge problem”: No one today has any idea of the myriad ways in which AI generally, and generative AI specifically, are now being used across global societies and economies.
The “scope of effective regulation problem”: Such potent technologies are most rapidly adopted by fringe elements of the global economy, especially the pornography industry and global criminal enterprises; these are precisely the actors that will pay no attention to regulation anyway.
If reinventing Gosplan for AI is not a viable strategy, however, there are three obvious responses that could support far more effective management of AI.
The first and most important step is to create informal, broad-based, networked working groups - say, at the level of the E.U., the U.S., or China - whose only task would be perceiving current realities on an ongoing, close-to-real-time basis. Such entities wouldn’t suggest laws, policies, or regulations, make investments, impose ideological mandates, or critique databases, because any such task would bias the core task of perception. They would seek only to know what is out there and what is going on. Knowledge comes before management - at least, it should.
Second, slow regulatory and institutional responses have already decoupled from the cycle time of AI evolution. They cannot hope to keep up, nor should they, for doing so might undermine their current effectiveness (law works as a critical cultural and civilizational infrastructure precisely because it is stable and predictable over time). Rather, an additional focus should be to create agile, adaptive capability, especially in critical areas such as security. This is not impossible; to some extent it’s how the U.S. Food and Drug Administration and the European Medicines Agency operate today, albeit with far more developed institutional frameworks and far deeper knowledge resources. Moreover, while broad regulatory initiatives are likely to be dysfunctional, there will undoubtedly be specific issues that can be addressed en passant. The failure of Gosplan does not mean that societies do not routinely impose laws and regulations on economic behavior.
Third, many companies and governments have already recognized that formal institutions, policies, laws, regulations, and other tools are too inflexible when faced with rapid and unpredictable change. New approaches such as “soft law” - think of industry codes of conduct, for example - and institutional initiatives such as “skunk works” have long been used to provide flexibility and rapid adaptability even within more structured and formal institutions.
Managing AI is unquestionably a fundamental challenge to today’s governments and institutions. It is not insurmountable, but there is little to suggest that we are yet deploying the imagination and creativity required.
Brad Allenby is President’s Professor of Engineering, and Lincoln Professor of Engineering and Ethics at Arizona State University. Previously, he was senior corporate attorney, then Environment, Health and Safety Vice President, at AT&T. Dr. Allenby received his BA from Yale University, his JD and MA (economics) from the University of Virginia, and his MS and Ph.D. in Environmental Sciences from Rutgers University. His recent work has focused on emerging technologies, especially in the military and security domains. Recent books include The Techno-Human Condition (with Daniel Sarewitz, 2011), The Theory and Practice of Sustainable Engineering (2011), Future Conflict & Emerging Technologies (2016), The Applied Ethics of Emerging Military and Security Technologies (2016), and Infrastructure in the Anthropocene (with Mikhail Chester, 2021).
Perhaps we need to deploy even more "imagination and creativity" than what you've described: the obvious response to humanity's inability to control its creations is to employ one or more CASs to regulate other CASs. This may require hierarchical, adversarial, or 'red queen' evolutionary arrangements, but it more closely replicates how human and artificial complex systems already operate (the internet, for example). Would such biomimicry be any worse than what soon-to-be-redundant people and power structures are proposing for themselves?
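For what it's worth, the commenter's "red queen" arrangement can be sketched computationally. The toy below is an illustration, not a proposal, and all of its names, strategies, and payoffs are invented here: it pits two adaptive agents against each other, each hill-climbing against the other's current strategy, so that neither ever faces a fixed target:

```python
import random

# Purely illustrative "red queen" toy: two adaptive agents, each
# improving against the other's current strategy, so neither target
# is ever fixed. The payoff and strategies below are invented.

def payoff(evader: float, detector: float) -> float:
    """The evader does well when its strategy sits far from the
    detector's current focus; the detector wants to close the gap."""
    return abs(evader - detector)

def mutate(x: float, step: float = 0.05) -> float:
    """Propose a small random variation, clamped to [0, 1]."""
    return min(1.0, max(0.0, x + random.uniform(-step, step)))

evader, detector = 0.2, 0.8
for generation in range(1000):
    # The evader keeps a mutation only if it escapes the detector better.
    candidate = mutate(evader)
    if payoff(candidate, detector) > payoff(evader, detector):
        evader = candidate
    # The detector keeps a mutation only if it tracks the evader better.
    candidate = mutate(detector)
    if payoff(evader, candidate) < payoff(evader, detector):
        detector = candidate

# Neither side converges to a static optimum: each "regulates" a
# moving target, which is the commenter's point about CASs managing CASs.
print(f"after 1000 generations: evader={evader:.3f}, detector={detector:.3f}")
```

The point of the sketch is simply that in such an arrangement "regulation" is a continuous co-evolutionary process rather than a fixed rule, which is roughly what regulating a CAS with another CAS would entail.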