Technological innovation is often seen as either a problem or a solution when it comes to building better futures. But the relationship between technology and the future is far more complex than that.
You write, "That said, it’s becoming increasingly clear that we need to rethink the relationship between technology, people, and the future, if we’re to thrive as a species."
Yes, thank you. The word "relationship" seems the key point in your statement. At its heart, this is really a philosophical problem more than a technical one.
The knowledge philosophy the modern world is built upon assumes that "more is better" when it comes to knowledge and power. This is a simplistic, outdated philosophy left over from the nineteenth century and earlier, when we were unable to develop knowledge and power that could threaten the system as a whole. In that earlier era, "more is better" made sense.
But we no longer live in that era. It ended at 8:15 a.m. on August 6, 1945, over Hiroshima, Japan. Seventy-five years ago. And we STILL don't get it.
The fact that we still don't understand a new era that began before most of us were born helps illustrate the key point: human beings are limited creatures, just like every other species on the planet. An open-ended "more is better" relationship with knowledge and power is in direct conflict with the reality of those limitations.
Nobody can know exactly where our limits lie, but by clinging to an outdated "more is better" knowledge philosophy we are racing toward them as fast as we possibly can.
Solutions? Sadly, there is little evidence that we will be capable of understanding the nature of today's world through reason alone. The "more is better" knowledge philosophy tells such a happy, self-congratulatory story that we are unlikely to willingly give it up.
But when reason proves insufficient, pain comes to the rescue.
What level of pain will be large enough to get our attention and cause us to seriously question our outdated knowledge philosophy, while still being small enough not to prevent learning and adaptation? Humanity might get lucky, and our education could be delivered in just such a dose.
I like how, in the Afterword, you emphasize the need for breadth, depth, and integration in learning about the history of human relationships with technology. Interestingly enough, these are the dimensions of polymathy, and this is what attracted me to your article – it struck me as highly polymathic. In my book "Why Polymaths?", I arrive at the conclusion that the future is polymathic.
You might consider Rebecca Solnit's distinction between optimists, pessimists, and the hopeful:
> Hope locates itself in the premises that we don’t know what will happen and that in the spaciousness of uncertainty is room to act. When you recognise uncertainty, you recognise that you may be able to influence the outcomes – you alone or you in concert with a few dozen or several million others. Hope is an embrace of the unknown and the unknowable, an alternative to the certainty of both optimists and pessimists. Optimists think it will all be fine without our involvement; pessimists adopt the opposite position; both excuse themselves from acting. It is the belief that what we do matters even though how and when it may matter, who and what it may impact, are not things we can know beforehand. We may not, in fact, know them afterwards either, but they matter all the same, and history is full of people whose influence was most powerful after they were gone.

https://www.theguardian.com/books/2016/jul/15/rebecca-solnit-hope-in-the-dark-new-essay-embrace-unknown
I thought about using something like that, or techno-realist, but shied away from both as they lead down a conceptual alley toward thinking about technology as something separate from our history of interdependent development. Still wide open to debate, though.
I think it's a good term and not a cop-out. It's pragmatic and avoids a false binary. I like pragmatist over -realist because between all the AI and human hallucinations, I'm pragmatically unsure what's real.
We're already getting close to mainstream subsumption of technology into our bodies (it has already reached our conscious minds). So is it worth the effort to try to direct human and technological evolution? Might our existing random walk (or crawl) not remain sufficient, with Darwinian-style winnowing of the least fit forms of government, technology, ethics, religion, and so on? There is, after all, no particular hurry – and no predefined goals to achieve – on the way to wherever we think we should be going. It would be a sorry form of existence if humanity converged on being solely concerned with 'throw me into more tech to grow me some more money'.
Have you talked much with your ASU colleague Brad Allenby about this over the years? You have me recalling his work on the techno-human condition with Dan Sarewitz!
Yep - lots : ) And a good reminder of their work -- they were arguing something similar back in 2013, although with more of a focus on how technology and humanity are intertwining and changing the very essence of being human. That's part of the landscape here that I barely brush against in this piece, but it's also deeply important.
Well, Substack’s algorithm has pointed me in the right direction: https://open.substack.com/pub/cybilxtheais/p/cybils-ai-poetry-1?r=2ar57s&utm_campaign=post&utm_medium=web
There's a group of us here on Substack who call ourselves Techno-pragmatists.
Where does one sign up to begin training for a 'pilot's license'?
... somebody needs to build the program first :)
That sounds very much like the type of thing that would be exceptionally worthwhile to build.
Certainly something that's on our radar here