In Bill Joy's "Why the Future Doesn't Need Us," AI is nowhere and everywhere
23 years ago, the Sun Microsystems co-founder raised serious concerns about out-of-control technology innovation. As we grapple with the latest wave of AI, it's time to revisit his article.
In April 2000, Sun Microsystems co-founder Bill Joy shook the world of tech with a wide-ranging article in Wired magazine.
For Joy, despite a successful career as a technologist and entrepreneur, the essay "Why the Future Doesn't Need Us" marked a point of existential introspection, fueled by visions of out-of-control advanced technologies destroying life as we know it.
Of course the world has moved on since 2000. But as we collectively grapple with the latest wave of AI-based technologies and the concerns that come with them, Joy’s article remains essential reading—even though it doesn’t explicitly mention artificial intelligence.
Joy’s article is an endearingly rambling yet deeply insightful exploration of technology-driven existential risk. It plays with intriguing thoughts and half-formed ideas as if we were privy to an internal struggle for clarity—you can imagine him working through his angst in the early hours as he drafted the piece. He even concludes: “I’m up late again — it’s almost 6 am. I’m trying to imagine some better answers, to break the spell and free them from the stone.”
It also feels genuine and humble — here’s a leader in his field recognizing that he needs to be thinking beyond the confines of his expertise as he grapples with what the future might be like.
Along the way he invokes the likes of Ray Kurzweil, Richard Feynman, Ted Kaczynski, and even the Dalai Lama as he works through his fears and tries to find a pathway forward. And he adeptly uses ideas that are new to him to expand his thinking.
The result is a disarming, engaging, and provocative article — and one that captures a moment when it truly felt that, perhaps for the first time since the invention of the atomic bomb, emerging technological capabilities were beginning to exceed humanity’s grasp in potentially destructive ways.
At the heart of Joy’s fears were self-replicating technologies — technologies with the ability to set off a chain reaction of replication that not only places them outside human control, but also threatens to obliterate humanity as we know it.
As he was writing, genetic engineering was changing the face of what was possible in the world of biotechnology, and tensions were rising between the promise of genetically altered organisms and the fear that they would come with unknown and untold risks.
To Joy, biological systems were the original self-replicating systems. And so the speculation that human tinkering could somehow set off a chain of events leading to a cancer-like explosion of unintended consequences made sense to him.
It’s a fear that, thankfully, hasn’t played out as yet with genetic engineering—in part because it’s fiendishly hard to create high-risk genetically modified organisms using 20th-century technologies. But Joy was writing before CRISPR/Cas9 gene editing and the current wave of gain-of-function research, and it would be interesting to see where his thoughts lie now, given the incredible advances we’ve seen in these areas in recent years.
Genetic engineering was only one of Joy’s concerns though. The concern that grabbed headlines at the time (and even led indirectly to an influential review by the Royal Society) was nanotechnology.
Joy was both intrigued and worried by speculations that we could one day outdo nature and create artificial self-replicating systems using nanotechnology. It’s an idea that remains at the fringe of nanoscale science and engineering (and one that I am deeply skeptical of). But in 2000, influential thinkers like Ray Kurzweil and Eric Drexler were convinced that this was an inevitable extension of emerging science.
What emerged was a fear that engineered nanoscale machines would one day be able to make replicas of themselves by utilizing the atoms and molecules that surrounded them, and that this could lead to an uncontrolled exponential wave of self-replication that ultimately converted everything into a sea of nanobots.
This so-called “gray goo” scenario is deeply flawed—even setting aside the scientific infeasibility of creating nanoscale self-assemblers, there are always rate-limiting factors that quickly curb the type of exponential growth that drove fears of a gray goo apocalypse. And yet despite this, speculations around out-of-control nanobots resonate deeply with concerns around value misalignment in modern-day AI.
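To see why those rate-limiting factors matter, here is a minimal sketch (my own illustration, not from Joy’s article, with entirely arbitrary parameters): unchecked replication grows without bound, while the same replicator competing for a finite resource pool quickly plateaus.

```python
# A minimal, illustrative sketch (not from Joy's article): unchecked exponential
# replication versus replication curbed by a finite resource pool (discrete
# logistic growth). All parameters are arbitrary, chosen only to show the
# qualitative difference.

def exponential(n0: float, r: float, steps: int) -> list[float]:
    """Each generation, every replicator adds r copies: n -> n * (1 + r)."""
    values = [n0]
    for _ in range(steps):
        values.append(values[-1] * (1 + r))
    return values


def logistic(n0: float, r: float, capacity: float, steps: int) -> list[float]:
    """Same growth rate, but replication slows as resources run out."""
    values = [n0]
    for _ in range(steps):
        n = values[-1]
        values.append(n + r * n * (1 - n / capacity))
    return values


if __name__ == "__main__":
    free = exponential(n0=1.0, r=0.5, steps=40)
    capped = logistic(n0=1.0, r=0.5, capacity=1e6, steps=40)
    print(f"Unchecked after 40 generations:        {free[-1]:.3g}")   # ~1.1e7, still climbing
    print(f"Resource-limited after 40 generations: {capped[-1]:.3g}")  # plateaus near 1e6
```

In the physical world, energy, raw materials, and heat dissipation play the role of `capacity` here, which is part of why unchecked exponential replication is so hard to sustain in practice.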
That resonance with AI feeds into the third leg of Joy’s triumvirate of existential risk—intelligent robots.
Joy never uses the term “artificial intelligence” in his article. This may be because he was writing at the tail end of the second “AI winter,” when seemingly promising research on artificial intelligence had failed to crystallize into progress (it took the growing abilities of deep learning and the compute power of GPUs to re-energize the field in the 2000s). But he was also intrigued by the possible outcomes of physically fusing computers with machines and, by extension, fusing these with humans.
Once again, Joy was deeply concerned by self-replication. As he writes, “… robots, engineered organisms, and nanobots share a dangerous amplifying factor: they can self-replicate. A bomb is blown up only once—but one bot can become many, and quickly get out of control.”
Joy’s thinking around the risks presented by intelligent robots was driven by concerns about physical threats, and infused with speculative thinking from the time around where humans might fit into a robot-dominated future. But under the surface, his worries reflect many current concerns around modern-day AI.
These concerns are rooted in the challenges of developing powerful technologies that emerge faster than our collective understanding of their potential impacts—especially where there are positive feedback loops that accelerate the disconnect between power, impact, and understanding.
From Joy’s perspective, these positive feedback loops were driven by self-replication, and the ability of an emergent technology to get out of hand and expand its disruptive capabilities faster than it could be contained.
Taken literally, I’m not sure that physical self-replication is our biggest concern with emerging technologies. But as a metaphor, it’s a powerful one—especially when applied to AI.
With current developments around machine learning, large language models, and AI more generally, we are creating technological capabilities whose potential, integration within society, and disruptive capacity are growing faster than the long-term (or even medium- to short-term) consequences can be easily understood and navigated. Here, there is a metaphorical self-replication taking place, where researchers, developers, and users eagerly build on and extend what is possible with AI, often with little thought to the consequences (although, thankfully, there are pockets of serious thinking around responsible AI).
The explosion in LLM-based technologies, and in creative ways of using and augmenting them, is just one example of this.
This emerging positive feedback loop around AI is only exacerbated by the increasing confluence between AI, genetic technologies, and nanoscale science and engineering. Ironically, if it ever becomes possible to create runaway genetically modified organisms or self-replicating nanobots, it will be advanced AI that helps bring this about, as it vastly amplifies mere human capabilities.
There is an added dimension to the metaphor of self-replication, though, that goes beyond Joy’s physical systems—and one that I must confess worries me a lot. This is the extension of the metaphor to “self-replicating” ideas, convictions, beliefs, and ideologies that are catalyzed and even promoted by advanced technologies.
With the increasing ability of generative AI to seductively slip under the checks and balances of our capacity to reason and critique, are we setting ourselves up to be irresistibly manipulated by the machines we make, or by the people who make them? It’s a perspective on self-replication that’s becoming ever more real with the advent of machines that are adroit at manipulating language, and with language’s influence, in turn, on how we think, feel, believe, and act.
Of course, it’s easy to slip into speculation and hype here. Yet the capabilities we’re beginning to develop are so powerful, and so far beyond our ability to navigate potential consequences through a “business as normal” approach, that it’s worth at least pausing to think about how we ensure these technologies lead to a better future.
This is where I believe that everyone grappling with the current wave of advanced technology transitions should (re)read Bill Joy’s piece. Beyond the specifics, it’s a reminder that if we rush into the future full of optimism, but without thinking deeply and broadly about potential consequences, we do so at our peril. It’s also a timely reminder that we need to find the time and humility to think about what we’re doing and where we’re going, before it’s too late.
And even though Joy doesn’t explicitly mention AI, his underlying message remains a powerful directive to the developers and users of increasingly powerful artificial intelligence-based technologies:
Think about the consequences now, before it’s too late.
Great article Andrew, thanks.
Personally, I don't think this is that complicated if we focus on the big picture instead of details. We only need to know two things:
1) Knowledge development feeds back upon itself, leading to an accelerating rate of knowledge development.
2) Human maturity, judgment, wisdom, etc. do not advance at an accelerating pace, but at an incremental pace at best.
https://www.tannytalk.com/p/knowledge-knowledge-and-wisdom
Imagine plotting knowledge development against maturity development on a graph. We'd see the two lines diverge at an accelerating pace.
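To make that graph concrete, here is a toy sketch (my own illustrative numbers, assuming compounding growth for knowledge and steady, linear growth for maturity):

```python
# Toy model (illustrative numbers only): knowledge compounds on itself,
# while maturity/wisdom grows incrementally. The gap between the two
# curves widens at an accelerating rate.

def knowledge(t: int, k0: float = 1.0, rate: float = 0.07) -> float:
    """Compounding growth: each unit of knowledge helps generate more."""
    return k0 * (1 + rate) ** t

def maturity(t: int, m0: float = 1.0, slope: float = 0.07) -> float:
    """Incremental growth: steady, non-compounding gains."""
    return m0 + slope * t

for year in (0, 25, 50, 75, 100):
    k, m = knowledge(year), maturity(year)
    print(f"year {year:3d}: knowledge={k:8.1f}  maturity={m:5.1f}  gap={k - m:8.1f}")
```

Whatever specific rates you pick, as long as one curve compounds and the other doesn't, the divergence is guaranteed.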
Nobody can know for sure when, how, or why these diverging lines will result in a crisis. Predicting such details seems impossible. But the simple logic of the graph makes a convincing, credible case that continuing on the current "more is better" course will somehow, someday, some way lead to calamity.
Most commentators on such subjects like to speculate about the nature of particular technologies: will chatbots become AGI, and so on.
A more reliable, less speculative way of conducting the analysis is to focus on the other side of the equation: we, the users of such technologies. Human nature hasn't changed substantially in thousands of years.