Oppenheimer is as relevant to the future of AI as it is to nuclear weapons
Watching Oppenheimer, I was struck afresh by how powerful movies can be in opening up discussions around ensuring the beneficial and responsible development of transformative new technologies.
I must confess that I didn’t want to like Christopher Nolan’s new box office-busting movie Oppenheimer. I often find him intense to the point of turgidity as a director, and I wasn’t sure he’d be able to elevate the film above this.
But elevate it he did. Despite my fears, I was impressed by the sophistication of the storytelling and the ways Nolan managed to reveal both the complexity of Robert Oppenheimer and the challenges that he, and others in the movie, grappled with.
What stuck with me most though as I left the movie theater was how the film relates to powerful and potentially life-changing contemporary technologies like AI.
The parallel with AI is certainly one that isn't lost on Nolan (although I wasn't aware of this as I saw the film last week). He's even quoted as saying AI researchers are now facing their own "Oppenheimer moment".
Of course, the potential catastrophic risks of artificial intelligence are very different to those represented by atomic fission and fusion, as are the circumstances and dynamics surrounding their development and deployment. And yet, as Oppenheimer so adeptly portrays, the trajectories that powerful technologies take are deeply intertwined with very human power dynamics, politics, and personal beliefs.
As a result, as I thought about the film I found myself playing out scenarios in my head around power plays associated with AI — along with the associated fights for truth and democracy in a world where artificial intelligence potentially undermines both, and the geopolitics of gaining mastery of one of the most powerful technologies to emerge in decades.
In these scenarios, AI doesn't play out in such a "contained" way as the development of nuclear technologies, at least as it's portrayed in the film. In contrast to much of what characterizes nuclear capabilities, the technologies surrounding AI are often hidden, dispersed, readily accessible, and driven by complex and diverse communities of players. And the risks it potentially represents are equally hidden, dispersed, and driven by a complex and diverse community of shadowy actors and naive developers.
Nuclear weapons represent a more tangible and immediate risk than artificial intelligence. Yet the differences here shouldn't diminish the challenges of ensuring the safe and responsible use of AI. Concerns are already mounting over how generative AI might be used to subvert democratic processes in upcoming elections. Signs are emerging of a potentially divisive AI "arms race" between the US and China. And experts have long been concerned about the weaponization of AI.
Understanding this complex risk landscape is challenging — especially when much of the complexity comes from the personalities, agendas, and power-wielding of humans in the system. And this is where movies like Oppenheimer can open up a window to ways of exploring and discussing ideas and pathways forward that might otherwise be hard to find.
Not surprisingly, this takes some effort — simply sitting for three hours through an intense biopic isn’t going to turn anyone into an informed expert in responsible innovation, and neither should it. But it would be good to see more people and organizations using platforms like this to move toward nuanced conversations around responsible and beneficial AI, and away from stances and statements that have all the sophistication of a bumper sticker.
If you haven’t seen Oppenheimer yet, it’s worth watching it (or even re-watching it) with an eye to how it opens a window into the politics behind powerful capabilities, and how this might in turn inform the successful, safe, and beneficial development of technologies like AI.
Of course, you should also enjoy it for what it is — a quite remarkable piece of movie making.
Afterword
It shouldn’t come as any surprise that this post was inspired by my own work in using movies to explore responsible innovation in nuanced ways. I actually set out to write about the movies I used in the book Films from the Future to do this — I’ve just updated many of the web-based resources associated with the book, and thought this was a good opportunity to post something on the films and where to watch them.
As it turned out, Oppenheimer scuppered this idea, although you can still read through the revamped resources here. And maybe, once Christopher Nolan's cinematic aura has died down a little, I will get back to that original plan.
Until then though, enjoy the movie ... and while you do, spare a thought for the challenges to come!