Category Confusion Complicates Efforts to Regulate AI
Brad Allenby considers how effective regulation of AI will critically depend on avoiding conflation of very different levels of functionality.
The title of a recent article in The Economist neatly summarizes the status of efforts to regulate AI: “The world wants to regulate AI, but does not quite know how: There is disagreement over what is to be policed, how and by whom.”
Indeed. But the challenge isn’t simply the usual confusion around agendas both real and hidden, the state of the art, who is to regulate what, whose ideology is to be served, and so forth. Rather, nascent efforts to regulate AI also suffer from category confusion. In particular, current discussions of AI regulation tend to conflate three very different levels of AI functionality, creating unnecessary complexity and raising the very real possibility that the resulting regulatory efforts will be not just ineffective but dysfunctional.
This is unfamiliar territory in part because AI, unlike previous foundational technology systems, operates at three levels:
As a new enabling infrastructure across many if not most existing social, institutional, and physical systems;
As an infrastructure in itself; and
As a critical subsystem of an emerging meta-infrastructure, the cognitive ecosystem.
Not surprisingly, it is at the most obvious and instrumental level that many of the immediate concerns about AI arise. After all, that is the level at which most people are seeing the effects of AI in their lives. Thus, for example, generative AI is transforming higher education, from traditional homework essays, which can now simply be produced to order by ChatGPT, to research and grant functions increasingly augmented by AI products, to (whisper it sub rosa) scholarly publications written partly or even largely by generative AI.
The practice of law is being fundamentally restructured. The traditional infrastructures with which engineers are concerned, from energy and the electric grid to water systems to intelligent buildings and road, rail, air, and freight transportation networks, increasingly rely on AI for safe and efficient maintenance and operation. Businesses increasingly use AI for everything from the preparation of public relations and marketing materials to accounting to strategic analyses of their large data pools. Insurance and financial institutions increasingly integrate AI throughout their activities, building it into global financial networks. The music and creative industries are overwhelmed with AI outputs that are increasingly competitive with professional human efforts. AI tools in many fields, from healthcare to policing and security, raise concerns for privacy activists. And cybersecurity, which almost always lags the technological frontier, is a continual challenge.
However, AI is not just an increasingly important input to many if not most familiar operations, institutions, and infrastructure systems; it is also a highly complex, rapidly evolving infrastructure in itself. Although most people think only of software when AI is mentioned, it is in fact platformed on highly complex, integrated networks of servers, chips, transmission technologies, consumer electronics, and other physical artifacts, supported by global supply chains and manufacturing facilities. This in itself is not unprecedented; in being both an enabling infrastructure and an infrastructure in itself, AI resembles to some extent previous revolutionary technologies such as electricity. At this level, geopolitical competition is fierce, and the strategic implications of failure are significant.
But AI is also a major component of an emerging meta-infrastructure, the cognitive ecosystem. This ecosystem integrates functionality from a number of more traditional technologies and infrastructures, from communications systems such as 5G, to computational capabilities in everything from home devices and automobiles to vast server farms, to AI tools at all scales, to the regulations, institutions, firms, and social media systems that co-evolve with the technologies. Schematically, it can be framed as consisting of three domains: the data economy, the cognitive infrastructure, and the institutional and services infrastructure. Its defining characteristics include:
Functional components linked together operationally by ever more powerful information and communication technologies and AI networks.
Multi-scalar functionality distributed throughout the ecosystem, from chips and mobile personal devices to AI-based arrays that learn from real-time global inputs (generative AI products and Tesla vehicles provide examples of the latter).
Global distribution of functionality.
Rapidly accelerating evolution of emerging systemic and behavioral capabilities.
Learning and information processing functionality at all levels, usually with little if any human input or supervision.
Powerful geopolitical and private competitive forces that drive accelerating evolution and that are, for a number of reasons, highly resistant to competent regulation.
This level in particular is neither understood nor appreciated by policymakers or the public. Thus, for example, the EU, the US, China, and other political entities are rushing to regulate AI, but none of the proposals to date show any consideration of their potential impacts on the functioning of the cognitive ecosystem.
While this three-level schematic is, of course, simplistic, it is nonetheless helpful. For one, it suggests that a major source of the confusion that The Economist rightly observes is a failure to couple regulatory initiatives to the appropriate levels. In general, it is the first-level effects that most people will notice, and with which they will be concerned; it is also the level at which targeted regulation is liable to be most feasible, in that it has the least potential for dysfunctional and unintended impacts. The second level raises strategic concerns, and is to some extent the focus of various trade and security policies, especially between the US and China. The challenge here is to ensure that regulations intended for the first level don’t adversely affect policy initiatives and goals at the second level.
The real challenge is the third level. This has so far generated meme-level hysteria – “AI will destroy us!” “AI will save us from climate change, and war, and . . . (name favorite evil here)!” – but little understanding and less insight.
Regulation by hypothesis is a dangerous game in such an environment, and should be preceded by at least some increased effort to perceive and understand what the emergence of the cognitive ecosystem actually means.
Brad Allenby is President’s Professor of Engineering, and Lincoln Professor of Engineering and Ethics at Arizona State University. Previously, he was senior corporate attorney, then Environment, Health and Safety Vice President, at AT&T. Dr. Allenby received his BA from Yale University, his JD and MA (economics) from the University of Virginia, and his MS and Ph.D. in Environmental Sciences from Rutgers University. His recent work has focused on emerging technologies, especially in the military and security domains. Recent books include The Techno-Human Condition (with Daniel Sarewitz, 2011), The Theory and Practice of Sustainable Engineering (2011), Future Conflict & Emerging Technologies (2016), The Applied Ethics of Emerging Military and Security Technologies (2016), and Infrastructure in the Anthropocene (with Mikhail Chester, 2021).
"AI is not the same as previous foundational technology systems". Hmm... I'm reminded of Paul Simon's "every generation throws a hero up the pop charts". You could apply the same three levels you describe to the Internet, with its foundational protocols and overlaid distributed applications. In fact any AI is entirely reliant on these deeper foundations for cobbling together sensible-sounding or looking outputs, ones which resonate with humans and their own distorted views of the world. Without being prompted and trained on these, AIs can't and won't do anything much at all.