What are the alternatives to calling for a pause on giant AI experiments?
The next generation of large language models beyond GPT-4 may present unique and unprecedented risks to society — but is temporarily halting their development the right way to go?
Last week a number of leading artificial intelligence experts — and many others — created something of a stir when they published an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
I haven’t signed the letter — not because I don’t think there’s a risk of potentially existential proportions emerging here (I do), but because having spent so long in the messy nexus of risks and benefits around emerging technologies, I’m not convinced that the proposed pause will have the intended effect.
But despite increasingly pointed rhetoric from some AI experts deriding fears over the potential dangers of advanced AI — including from Meta’s Chief AI Scientist Yann LeCun — Large Language Models (LLMs) hint at a deeply complex risk landscape that we are, at this point, unprepared to navigate.
Diving into the nature of this landscape is a topic for another time (although some aspects are captured in this video). But what I do want to touch on briefly here is the nature of potential pathways forward.
The thoughts below are a lightly edited version of a response to a tweet from Gary Marcus (including links). We got into a short back and forth about governance approaches to LLMs and AI/AGI, and this is what I rattled off — it’s a little rough, but given the speed with which things are developing here, it’s worth posting:
Having worked on navigating the risks and benefits of advanced technologies for over 20 years now, the only thing I'm certain about is that there are no silver bullets for ensuring the benefits of AI/AGI while avoiding possible risks — but we do have some sense of how to frame the questions that are necessary to move forward.
A first step is moving away from an ethics framing to one around risk and socially responsible/beneficial innovation. A bioethics approach worked for recombinant DNA, but with the human genome project there was already a growing emphasis on ethical, legal, and social implications (ELSI) of the emerging science.
With nanotechnology, synthetic biology and a number of other technologies there was a growing shift to agile governance, anticipatory governance, and similar “soft law” approaches to managing risks and benefits of advanced tech. This shift recognized that, with a growing pacing gap between “hard” regulations and tech capabilities, alternative governance approaches were needed (especially as hard regs are a very unwieldy double-edged sword). I would include here my own work on risk innovation (aimed at supporting agile decisions around intangible risks) and responsible innovation more broadly.
Over the past few years, AI has swung the conversation back to an emphasis on ethics (including the Asilomar AI Principles developed in 2017) — in part because the people framing the question, while well-intentioned, haven’t necessarily been aware of the sophistication of conversations going on elsewhere (although full disclosure, I was there at the 2017 Asilomar meeting). But with LLMs the rubber is hitting the road with decision making in ways that ethics alone cannot help with — in part because while ethics help parse out what is considered right and wrong, they don’t on their own provide a practical framework for achieving safe and beneficial technologies.
This is a long preamble to say that, with LLMs, we need a combination of thinking around agile governance, “progressive” regulation (meaning innovative regulatory approaches), and novel approaches to societally beneficial innovation — all of which comes under the rubric in my work of navigating advanced technology transitions. And we need this, like, yesterday!
Practically, one possible way forward (but by no means the only one) is to rapidly convene a world congress of diverse leading experts, thought leaders, and decision makers, to develop a short and medium term roadmap for the beneficial development of LLMs and beyond. This process should include experts in governance, responsible innovation, ethics, complex systems, as well as creative and disciplinarily unbounded thinkers and thinking around technology transitions. It should also include smart and collaborative thinkers from the humanities, arts and social sciences — diversity of thought and perspective will be essential here. And it should also include representation from industry/innovators, academia, government, and civil society.
Selection of participants should be made on the basis of their ability to work constructively with others on an implementable roadmap, without ego getting in the way. And it should be international.
The congress should be professionally led to ensure it is outcomes-based and not a talking shop, with an aim of reaching implementable consensus guidelines and a roadmap for next steps, recognizing that naive errors at this stage will deeply impact all stakeholders. And it should emphasize public-private partnerships.
The bottom line is that the risks of LLMs and associated technologies are potentially catastrophic, near-impossible to fully map out, and even harder to navigate. And yet the biggest risk is not taking action or, worse, assuming no action is needed.
Image: Generated by Midjourney from the prompt “Create a visually arresting abstract painting that captures the challenges of governing large language models and AI / artificial general intelligence in order to ensure the technology is developed safely”
Andrew says: "Practically, one possible way forward (but by no means the only one) is to rapidly convene a world congress of diverse leading experts, thought leaders, and decision makers, "
Er - so invite roughly the same self-referential elite thought-bubble that is currently driving (or losing control of?) hi-tech innovation trends? What's that hackneyed Einstein quote about using the same kind of thinking we used when we created the problems?
One of the big problems with the AI governance story as articulated by the Future of Life letter is that it fully invisibilizes the already present problems and threats of sub-GPT-4 algorithmic decision making and non-LLM AI (for people of colour, farmers, low income and marginalized people, for example). It does this by locating the 'AI problem' in some future 'existential risk' scenario that also threatens white men of privilege and therefore should now be taken seriously, when rampant racist algorithms and oligopoly control over data to nudge behaviours were considered no big deal. That's why someone like Elon Musk can happily sign this letter - it doesn't challenge power one iota.
Convening a meeting of the great-and-the-good (even with a multistakeholder sprinkling of hand-picked NGOs for optics) will just continue this invisibilization of AI's current and real harms and strengthen elite power interests over AI. How about instead a democratically convened and governed set of people's assemblies moving towards a societal set of proposals through citizens' juries, for example - like the bottom-up process that was behind the abortion referendum in Ireland? Then a moratorium isn't just a meaningless x number of months to hold back the opposition, but is conditioned on a genuine societal discussion and engagement.
By the way, great to see you on Substack, Andrew ... here's mine: http://www.scanthehorizon.org :-)
You write, "Large Language Models (LLMs) hint at a deeply complex risk landscape that we are, at this point, unprepared to navigate."
The history of nuclear weapons should demonstrate that we're unlikely to be prepared to navigate an AI future for at least decades. It's been almost 80 years since Hiroshima, and we still have no clue how to get rid of these things. No clue.
QUESTION: What are the compelling benefits of AI which justify taking on another substantial risk at a time when we already face so many others?
I've been asking this question repeatedly everywhere I go in AI land, and no answer yet.
Before we proceed with artificial intelligence we should first see if we can discover human intelligence. It must be out there somewhere...