AI proponents are claiming future artificial intelligence capabilities will solve pretty much everything – including climate change. But only if we massively increase energy consumption to get there.
You write, "If you do believe in the “AI can fix anything once it’s advanced enough” theory, the only logical conclusion you get to is that this includes “fixing” people — including who we are, what we do, and what makes us “us”. It may be an elegant engineering solution, but I’m not sure it’s what most people would aspire to!"
Excellent, thank you for making this point.
Who we are, what we do, and what makes us "us" is what we're all made of. Thought.
Thought is the source of both our genius and madness as a species. Because the genius and madness arise from the same source, they are inseparable, forever two sides of the same coin. https://www.tannytalk.com/p/the-nature-of-thought-part-2
It's important to make a distinction between the content of thought, and the nature of thought.
Editing the content of thought is easy. You can agree with this post, or not, thus editing your thought content on this subject.
Editing the NATURE of thought is something else altogether. Even in the extremely unlikely event that changing the medium of thought was somehow possible, be assured that pretty much nobody would welcome such a radical change to our most fundamental human condition.
Without editing the NATURE of thought in some fundamental, revolutionary manner, we are still stuck with the genius + madness equation. Which is to say, human beings will still be of limited ability. Which brings us to this:
Marrying an unlimited "more is better" knowledge philosophy to creatures of limited ability is a highly irrational train that will inevitably go off the tracks at some point.
https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of
What obscures this common sense reality from us is that the most respected and credible leaders of our society, especially the scientific community, are all riding on that train.
This result could change the power calculus. Roughly, they're "speeding up multiplication" by getting the order of magnitude (exponent) correct and then being... liberal... with the details (mantissa). That it works is only surprising if you're also surprised that neural nets survive quantization so well; the robustness of deep neural nets continues to amaze.
https://arxiv.org/html/2410.00907
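The exponent-vs-mantissa trick above can be sketched in a few lines. To be clear, this is just an illustration of the general idea (a Mitchell-style approximate multiply via integer addition of float bit patterns), not the exact algorithm from the linked paper:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a float32 value as its raw 32-bit integer pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit integer pattern as a float32 value."""
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b for positive, normal floats by simply adding
    their bit patterns as integers. The exponent bits add exactly (the
    order of magnitude is right); the mantissa product is only roughly
    approximated, so the result lands within ~11% of the true product,
    and is exact whenever one operand is a power of two."""
    ONE = 0x3F800000  # bit pattern of 1.0; subtracting it removes the double-counted exponent bias
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - ONE)
```

One integer add replaces a floating-point multiply, which is exactly the "right exponent, liberal mantissa" trade: a bounded relative error that well-trained, quantization-robust networks tend to shrug off.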
Thanks Mark!
In my novel Paradox, the AI helps solve Nuclear Fusion. Problem is that it doesn't solve the underlying human problem of politics. It's a fun exploration into what could be and the risks: https://amzn.to/3YdU231
The bigger issue is that all this talk of climate change is baked in with a ton of politics. We haven't even done an analysis of what's optimal for the planet, let alone reckoned with the hubris of thinking we can stop it.
More on that topic here:
https://www.polymathicbeing.com/p/the-climate-is-changing
Bottom line, there's a lot of bad science on all sides that obfuscates the problem and opportunity space.
Thanks Michael - and great call-out to the novel!
I think they believe they can use AI to finally figure out nuclear fusion, or some other massively centralised energy production system, and then *easily* deliver that supply to all parts of the globe and solve the problem. But I don't think that takes into account anything about how societies are structured, how debt works, or how people work. Elon Musk's vision of solar panels, while it can also be argued against, puts the reactor in the sky and lets people individually connect their homes, which seems like a much more sustainable strategy.
I think that's true. There's incredible potential in AI helping to make challenges that have so far stumped humans resolvable -- including challenges in the energy space. But as you say, so many societal/human factors play into this and make things complicated -- potentially resolvable, but only with a deep understanding of humanity.
Largely agree with all you’ve written, with one caveat: it looks possible we can just continue to scale what we’ve got up until around 2030:
https://epochai.org/blog/can-ai-scaling-continue-through-2030
It may be, as you say, that scaling laws stop at some point, but the empirical evidence thus far contradicts that (along with the intuition that more compute ought to equal more "intelligence").
My money is on continued improvement in model capability for at least the next five years. Like you, however, I can’t connect the dots from that to “climate change solved”.
Thanks Mark -- the link to the Epoch AI report is really important; anyone taking a serious dive into AI and energy transitions needs to read it carefully, especially as it takes a more holistic view of the barriers and opportunities in generating and delivering energy to fuel scalable development.
As you indicate, there's a lot of uncertainty around scaling laws and AI -- I oscillate back and forth here, tending toward the position that compute at scale is more important/transformative than people often realize, while also suspecting that radical advances in architecture will be needed to achieve effective scaling. However, in principle there's nothing to stop those advances from occurring :)
I'm also not opposed to the idea of heavy energy investments to develop transformative technologies -- I just think we should have a nuanced understanding of what we're trying to achieve and why (as well as the possible downsides) as we proceed :)
It speaks to the entire AI story: risking human extinction for their dream.
I'm in PauseAI, and I didn't think we had much in common, but seeing this kind of aggressive attitude suggests we might be able to coordinate on this.
To be honest I think we're feeling our way here as a species, and can't ignore either the irresistible force of change, or the responsibility that comes with innovating in ways that force dramatic change that has far reaching implications. And asking tough questions is part of that process.
What is a good email for you?
easy to find me at andrew.maynard@asu.edu :)
This seems like a great topic to discuss in this nuanced manner. I'll ping you to see if you'd like to be on a podcast.
Tragedy of the commons or self fulfilling prophecies? 😅