Is this how AI will transform the world over the next decade?
Anthropic's CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It's audacious, compelling, and a must-read for anyone working at the intersection of AI and society.
Anthropic CEO Dario Amodei’s essay “Machines of Loving Grace”¹ has been causing quite a stir this past week. At over 50 pages long (yes, I printed it out), it lays out an audacious and detailed vision of how artificial intelligence could accelerate progress and transform the world for the better. It’s also one that I think anyone interested in how AI might radically transform society over the next 10 - 20 years should read.
It’s easy to dismiss articles like this from tech leaders — they have a tendency to be deeply hyperbolic and overly optimistic in their pronouncements — but Amodei’s is different. He writes with depth and clarity, and with a vision that’s tempered with humility and reason.
And while I don’t agree with everything he says, there’s enough thought and substance in his essay to warrant careful consideration. I must confess that I rather enjoyed reading the essay and found it refreshingly enlightening.
That said, this is a thought piece that I think will challenge a lot of readers, whether they are fixated on rather narrow and unimaginative applications of generative AI, or are ideologically opposed to the concept of an AI-accelerated future.
But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.
Given the scope of the paper, it’s hard to write a response that isn’t as long as, or longer than, the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.
That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.
Framing The Possibilities
Amodei’s paper starts out by asking how AI might transform the world over a 5 - 10 year period, once “powerful AI” has been achieved — a point he believes we could hit as early as 2026.
The framing is intriguing as it avoids the hype surrounding visions of AI utopias and dystopias, and instead allows plausible — but nevertheless radical — possibilities to be considered.
This does, though, depend on us achieving what he calls powerful AI first.
Here, Amodei is refreshingly specific on what “powerful AI” might look like. Avoiding speculation around artificial general intelligence, or AGI, he develops the idea of AI as a technology that is capable of problem solving with agency, and at a scale and speed that surpasses what is possible by humans.
His bar for intelligence here is machines that can “prove unsolved theorems, write extremely good novels, write difficult codebases from scratch” and more. He also acknowledges that the nature of intelligence is complex and contested, and in doing so he foreshadows some of the challenges of AI-based problem-solving within a complex human society.
Much of the essay hinges on the concept of an AI-driven “compressed 21st century” — the idea that, with AI’s help, progress could be ramped up by a factor of 10 or so beyond what humans alone are capable of.
If this were possible, he argues, the next projected 100 years of human innovation and progress could be compressed into around 10 years. It’s something that may not sound that transformative, until you begin to realize that you may see advances in your lifetime that you otherwise assumed wouldn’t occur until after you’re long gone!
If it’s assumed that this accelerated speed of innovation could occur after the emergence of powerful AI, Amodei’s ideas are compelling. But this is an assumption that’s likely to be limited by a number of factors — and Amodei tackles at least some of these head-on.
These factors include (and I’m changing his order here):
The speed at which the world operates outside of machine-based intelligence, recognizing that, in Amodei’s words, “the world only moves so fast”;
Physical laws, which ultimately rate-limit what is possible (the speed of light being one of the better-known examples);
Intrinsic complexity, where even an exceptionally powerful AI is likely to run into challenges in solving complex problems;
Constraints from humans, where human behavior is an inseparable part of intrinsic complexity — unless there are fundamental changes to what it means to be human; and
The need for data, where even an exceptionally capable AI cannot solve problems by simply making stuff up.
These limiting factors are what led Amodei to restrict his compression factor to one year of AI-accelerated progress being equivalent to ten years of human progress.
In some cases I think even this is overly optimistic — especially when human behavior is thrown into the mix — but even thinking through these limiting factors is an important part of the conversation around how AI might radically compress the rate of progress.
Navigating Advanced Technology Transitions
Building on the framework he constructs, Amodei goes on to explore — in some detail — possible AI-driven advances in five areas that cover biology and physical health, neuroscience and mental health, economic development and poverty, peace and governance, and work and meaning.
In each of these sections he considers what might be plausible within a 5 - 10 year window after the emergence of powerful AI. The arguments he makes are considered and thoughtful, and accompanied by a refreshing level of humility where Amodei admits he may be wrong. But he also argues that, while the future is unknowable, it’s still important to consider plausible futures as we think about how AI may transform the world.
And in this, his thinking aligns closely with my work around navigating advanced technology transitions.
Across his five areas of focus, Amodei explores how powerful AI could lead to changes that seem quite plausible on the face of it, and yet when the implications are considered would be radically transformative. These include eliminating infectious diseases, doubling the human lifespan, and freeing people from the shackles of their biological heritage.
These and other transformations would bring about disruptive changes through society that we are currently ill-equipped to navigate. And while visions of a disease-free future (for example) might be compelling, this and other predictions won’t play out well unless there are associated changes in how we learn to navigate a future where past ideas, processes and ways of behaving simply don’t apply.
Amodei digs into these aspects of an AI-accelerated future in some depth across his five domains — and effectively begins the process of thinking through how we might navigate AI-driven transitions.
This is where I think his reasoning is both considered enough to pay attention to, and is also somewhat lacking. Despite a level of nuance and thoughtfulness that is impressive, there’s still a sense in the essay of powerful AI being able to “fix” problems that we perceive in the world — and in the people who inhabit it.
This is far removed from the “AI will fix everything” rhetoric of Sam Altman and others, but it’s still there — although I also get a strong impression that there’s an openness to new ideas and thinking here.
In many cases within the essay Amodei acknowledges the limitations of his thinking — especially where he sees the complexities of progress within a future governed by humans, with all of our seeming oddities and irrationalities. Yet there’s still a sense that the world is made up of problems and solutions — and that AI will radically change the balance between the two.
And perhaps the area where my thinking diverges most from his is in the section on neuroscience and mental health.
Being Human
In the essay’s section on neuroscience and mental health, Amodei suggests that most mental illness can probably be cured, and that powerful AI will both hold the key to this, and compress a hundred years of human progress on mental health into a mere 10 years.
I suspect that many people will agree with the sentiment here. But the idea that there are relatively simple fixes to conditions that not everyone is likely to agree can be fixed — or should be fixed in a certain way — is contentious.
Certainly there are likely to be some cases that are clearer than others, but Amodei’s thinking here gets close to straying into territory where it’s unclear who decides what is “normal” and what needs to be “fixed”.
To be sure, there are neurological conditions that are desperately in need of new understanding if more people are to lead lives that they consider to be better than what is currently possible. But great care needs to be taken in who decides what “better” means.
Amodei’s approach to human norms and fixes extends to suggesting that mental illness may be prevented through genetic intervention, that behaviors and mental states that we don’t currently consider to be clinical diseases could be fixed in the future, and that in an AI-accelerated future the “human baseline experience” could be improved.
He even speculates that, with powerful AI fixing mental illness, “the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization.” He adds that “I also suspect that improved mental health will ameliorate a lot of other societal problems, including ones that seem political or economic.”
There may be some truth to Amodei’s thinking here. But an AI-accelerated future that interferes with what makes us “us” in order to fix things begins to raise important questions around what it might mean to be human in the future.
This question threads through Amodei’s essay, and while I may disagree with him on some points, I’m glad that he grapples with this as seriously as he does.
It’s reflected in his acknowledgement of the contested understanding of what intelligence is; in his discussion of the limiting factors associated with the messiness of human society (although I sense some frustration there); in his focus on the need to take social inequality — especially when it’s driven by advanced technologies — seriously; in asking how humans will find meaning in an AI future; and in a driving desire to see AI as positively transforming what it means to be human.
At the same time, I think that a more nuanced understanding of how people and society work — and of what defines us and gives meaning to our lives — would be helpful in thinking about what an accelerated AI future might look like; if only because at the heart of every failed technological dream and unexpected technological turn, there are people behaving as people are wont to do.
This is touched on in the essay’s section on “the opt-out problem”, where Amodei raises serious concerns around people opting out of AI-enabled benefits.
Here he notes that “this is a difficult problem to solve as I don’t think it is ethically OK to coerce people, but we can at least try to increase people’s scientific understanding — and perhaps AI itself can help us with this.”
Anyone who’s studied the complex relationship between people, society and technology innovation will recognize this as the largely outmoded “deficit model,” and is probably rolling their eyes at this point. The idea that people reject new technologies because they have a deficit of understanding that can be fixed through better education was debunked decades ago. Yet it’s a misunderstanding that still persists in scientific and technological circles.
Rather, navigating advanced technology transitions — and especially the one that powerful AI is likely to bring about — will need to reflect how people actually behave, not just how we think they should.
And this extends to grappling with everything from addressing sociotechnical challenges like climate change and food security, to ensuring peace and effective governance in a complex globally interconnected society, to even understanding what it means to have a good life — wherever you might live.
The Compressed 21st Century
So should you read Dario Amodei’s “Machines of Loving Grace”?
Absolutely!
Despite its limitations (and any essay like this will have them), this is an important reflection on the near- to mid-term future of AI — and one that should be essential reading for anyone who is interested in how AI could transform the world over the next 10 - 20 years.
Even more importantly, the essay is an open invitation for further reflection and action on how to successfully navigate the coming AI transition. Because despite my disagreements with some of Amodei’s ideas, I agree with him that advanced AI is likely to change the world faster and more radically than most people currently realize.
Of course, this is dependent on reaching a level of “powerful AI” that lies beyond current capabilities — and it’s by no means certain that this will be achieved any time soon.
But it is a realistic possibility.
What makes this all the more plausible is that Amodei is not talking about massive leaps in superintelligence, or machines that can self-improve at vastly accelerated speeds, or even computers that are self-aware. Rather, he’s exploring a natural extension of emerging capabilities that can problem-solve with agency in ways that are faster, and more transformative, than humans can achieve on their own.
If this is even a small possibility, we need to be preparing now for the technology transitions that will follow. We need new ideas, new thinking, and new pathways forward to ensure that these capabilities lead to the type of future we want, rather than one we do not.
And Amodei’s essay is a compelling kick-starter for the types of conversations and actions that need to be happening now.
Epilogue
There were a couple of things that struck me while reading Amodei’s essay that didn’t fit neatly into the article above, but that I wanted to touch on anyway.
The first is the observation that the first two of his focus areas — biology and physical health, and neuroscience and mental health — touch on concrete advances in capabilities brought about through advanced AI, while the last three — economic development and poverty, peace and governance, and work and meaning — are more speculative reflections on how powerful AI might transform society in areas that are both complex, and assumed to change very slowly.
I’d encourage anyone who’s especially interested in the potential transformative dynamic between powerful AI and society to read these. You’ll probably disagree with some of the ideas. But hopefully you’ll also be stimulated to think differently about how AI might accelerate the rate of change in these areas, and what that might mean for the future.
The second thing I wanted to acknowledge is Amodei’s call-out to Iain M. Banks’ The Player of Games — and as an avid Banks fan, how could I not fall a little in love with the piece at this point!
It’s been a while since I read the book, and my recollection is that the plot was more complex in how it pitted the ease and comfort of a post-scarcity society (which aligns with Amodei’s vision of a vibrant AI-accelerated society) against the meaning that emerged within a harsh and hierarchical society. Yet Amodei’s point, drawn from the novel, is well taken: that “basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our most destructive impulses aren’t” — as long (and this is my caveat) as these intuitions are allowed to flourish.
And maybe this is what an AI-accelerated future might mean — as long as we learn how to steer it in that direction as we navigate the AI technology transition.
¹ The title of Amodei’s essay comes from Richard Brautigan’s poem “All Watched Over by Machines of Loving Grace.”