The double-or-nothing bet on AI "fixing the climate"
AI proponents are claiming future artificial intelligence capabilities will solve pretty much everything – including climate change. But only if we massively increase energy consumption to get there.
A couple of weeks ago OpenAI’s CEO Sam Altman claimed — rather boldly — that advanced AI will enable us to fix the climate. He also threw in space colonization and the “discovery of all physics” for good measure.
The trouble is, the advances that are necessary to achieve this speculative climate fix will need energy — a lot of it.
It almost reads like something out of a dystopian sci-fi novel — the claim that we can fix the planet and save humanity, but only if we risk betting everything on a miracle technology that’ll allow us to do this.
And yet the idea of doubling down on the AI bet to ensure a better future is taking hold.
It was recently revealed, for instance, that the infamous Three Mile Island nuclear power plant is being reopened to power Microsoft’s increasingly energy-hungry data centers.
And former Google CEO Eric Schmidt was reported this past week as saying that it's time for us to fully invest in AI infrastructure because climate goals are too lofty to reach anyway.
The switch from protecting the planet by reducing energy use and relying on renewables, to increasing energy production in support of solve-anything AI technologies, represents a profound reversal in how energy and the future are being framed.
It also throws a potential wrench into more conventional efforts to smoothly transition to a more sustainable energy future.
There are even indications that non-renewable energy sources are being considered to feed the voracious energy appetite of what Altman refers to as the “Intelligence Age.”
The trouble is, there is absolutely no guarantee that increasingly powerful AI-driven technologies will help address deeply complex challenges that have as much to do with human behavior as with anything else.1
To be honest, impressive as today’s AI technologies are, I’m not even convinced there’s evidence that they will continue to scale in what they can achieve simply because we pump more energy and resources into them.
And this is a problem if we’re betting on short-term losses leading to long-term gains.
And then there’s the politics of energy production and use.
Beyond AI, there are arguments being made that economic prosperity and geopolitical power more broadly depend on ramping up use of non-renewable energy sources. It’s at the heart of the Trump campaign’s “Drill Baby Drill” slogan, as well as policy recommendations from organizations like the Heritage Foundation.
Interestingly, even though the Heritage Foundation’s stance on energy (and many other topics) is highly controversial, it aligns with the rhetoric around the supreme importance of access to massive energy resources if AI is to fuel economic growth and prosperity.
I don’t know whether the claim is genuine: that doubling down on every available ounce of energy is essential to creating AI systems that will solve the energy problem (and everything else associated with it).
But what is clear is that it’s a precarious strategy that depends on speculation, naive visions of the future, and a lack of understanding of how technology innovation and society are intertwined.
Because of this, emerging conversations around AI and energy — and the actions they are leading to — need to be front and center of any initiative that’s focused on supporting sustainable futures.
It could be that there’s merit to ramping up the use of nuclear power to fuel AI server farms and data centers — although it troubles me that this represents a growth in energy demand rather than a redistribution of energy use. It may be that the short-term use of non-renewables will lead to long-term AI-guided energy transitions — although I doubt it. It’s even conceivable — but only barely — that the AI energy bet might actually pay off.
What is certain, though, is that without leadership from organizations at the heart of building human-centric sustainable futures, Altman’s vision of an Intelligence Age where AI fixes everything could go horribly, horribly wrong.
And that’s not the AI “fix” that I suspect most of us want to see.
If you do believe in the “AI can fix anything once it’s advanced enough” theory, the only logical conclusion is that this includes “fixing” people — including who we are, what we do, and what makes us “us”. It may be an elegant engineering solution, but I’m not sure it’s what most people would aspire to!