Navigating Advanced Technology Transitions
We need new thinking, research, ideas, and strategies for ensuring that advanced technologies are harnessed to create a better future for as many people as possible. And we need them fast!
This is, by any measure, a procrastination post. I’m supposed to be writing a commentary on advanced technology transitions for a prominent journal, and I’m stalling.
It’s not that I don’t know where to begin — I know where I want to go with this commentary, and what I want to say in it. It’s just that, as usual, I’m struggling with that first step of cutting through the cerebral fog as I get started on a more formal piece of writing.
So I thought I would start here and hopefully dissipate some of that fog.
Over the past 20 years or so I’ve been working at the cutting edge of transformative technologies, and exploring ways of ensuring that the benefits substantially outweigh the risks as they are incorporated into society.
In each case, the communities of experts and practitioners I’ve worked with have learned that successful technology transitions take an incredible diversity of expertise, understanding, perspectives, and partnerships. Yet it often feels that, with each new wave of transformative technologies, there’s a tendency to metaphorically re-invent the wheel as different groups and communities try to ensure their success.
This is now playing out with AI and Large Language Models (LLMs) like ChatGPT, where well-meaning advocates for societally beneficial innovation seem blissfully unaware of lessons learned from past technology transitions, and how to harness and build on these.
This, perhaps, isn’t surprising. In a world where we’re deluged with new knowledge, information, ideas, and perspectives on a daily basis, it’s near-impossible to cut through the noise and join the dots in coherent and informed ways. But it’s also not helped by a lack of coordinated research, learning, and thought leadership around advanced technology transitions that is highly visible, highly accessible, and highly relevant to navigating emerging challenges.
It’s not that the pieces of the puzzle aren’t there — many of them are. There’s a wealth of expertise around transdisciplinary and multi-sector approaches to mapping out the challenges presented by new technologies, and ensuring that the societal and economic benefits far outweigh the risks. Yet much of this is buried in academic research, or within disconnected communities and initiatives.
If we are to successfully navigate technology transitions that include AI and AGI, as well as quantum technologies, advanced human augmentation, gene editing, the metaverse, and many more, we need greater coherence and relevance around how we combine existing understanding and generate new ideas. And we need to make sure that this is highly legible and accessible to people who increasingly need guidance and insight into navigating technology transitions that could end badly if not handled well.
This is where the concept of Advanced Technology Transitions makes sense as an integrated and transdisciplinary focus of research and scholarship, as well as a domain of thought leadership; an area of critical education, learning, and skills development; and a platform for mobilizing expertise for positive societal and economic impact.
So far, none of this level of coherence has been visible around LLMs. In fact, we’ve seen a plethora of well-intentioned, although sometimes uninformed, ideas circulating, with decision-influencing conversations being driven by advocates who often have a very rudimentary understanding of complex technology transitions.
This wouldn’t matter so much if this were 30, 50, or even a hundred years ago, when the pace of innovation tended to allow enough time for risks and benefits to even out in the end (at least, to a certain extent). But this is 2023, and the pace of innovation is beginning to vastly outstrip our ability to find solutions to emerging problems using conventional thinking and established approaches.
Instead, we need unconventional approaches to unconventional challenges if we’re to successfully navigate advanced technology transitions. This will depend on new and joined-up ways of thinking that are not constrained by outmoded ideas or disciplines, but rather combine knowledge, ideas, and insights in transformative ways, including ensuring that the arts and humanities (as well as areas of understanding that don’t have neat disciplinary labels) are as much a part of the mix as the social sciences, and science and engineering more broadly. It will also mean ensuring that relevant expertise and understanding are accessible, legible, and meaningful to those who could benefit from them. And it will require having the agility to respond to new challenges with speed.
Of course, achieving this will not be easy. But we can start by recognizing that there is a need for developing expertise in advanced technology transitions, and incorporating this into how we navigate emerging technologies at all levels.
Beyond this, there needs to be strategic investment in new initiatives and communities that focus on advanced technology transitions, and mechanisms to ensure that these have the societal and economic relevance and impact that’s needed.
But before we get there, maybe the first step is simply recognizing that we are faced with navigating technology transitions that are unlike anything we’ve had to grapple with before, and that success is going to depend on shaking up how we approach the increasingly convoluted and complex intersection of technology, society, and the future.
And, maybe, framing this challenge through Advanced Technology Transitions will help us achieve this.
You write, "Over the past 20 years or so I’ve been working at the cutting edge of transformative technologies, and exploring ways of ensuring that the benefits substantially outweigh the risks as they are incorporated into society."
Don't forget that if any one risk is large enough, it contains the potential to erase all the benefits. It's not just a matter of benefits outweighing risks.
You write, "In each case, the communities of experts and practitioners I’ve worked with have learned that successful technology transitions take an incredible diversity of expertise, understanding, perspectives, and partnerships."
Or it could take something far simpler. Like slowing down. Or declining to develop some technology.
You write, "...and ensuring that the societal and economic benefits far outweigh the risks."
Apologies, but this is 19th-century thinking: the notion that if benefits far outweigh risks, the problem is solved. This era of history ended in 1945.
You write, "If we are to successfully navigate technology transitions that include AI and AGI, as well as quantum technologies, advanced human augmentation, gene editing, the metaverse, and many more, we need greater coherence and relevance around how we combine existing understanding and generate new ideas."
And we might recognize this. Ever more and ever larger powers, delivered at an ever faster pace, are a recipe for civilization collapse. The word "acceleration" should be sufficient to illustrate why such a process will inevitably outrun our management abilities.
https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of
You write, "This wouldn’t matter so much if this were 30, 50, or even a hundred years ago, when the pace of innovation tended to allow enough time for risks and benefits to even out in the end (at least, to a certain extent). But this is 2023, and the pace of innovation is beginning to vastly outstrip our ability to find solutions to emerging problems using conventional thinking and established approaches."
Yes! Two examples of the needed unconventional thinking can be found here:
https://www.tannytalk.com/p/our-relationship-with-knowledge
and here:
https://www.tannytalk.com/p/world-peace-table-of-contents
You want to improve our management strategies, which is good. But you seem to not yet grasp that there is a limit to how much we can improve them.
Knowledge development feeds back upon itself, leading to an ever-accelerating rate of knowledge development. If we were to plot that on a graph against the development of human maturity, we'd see the lines diverging over time. Sooner or later they will diverge too far.
Two areas to look into:
1) Learn how to take control of the pace of the knowledge explosion to better match our ability to adapt to change.
2) Put some focus on violent men, the single most dangerous factor with regard to emerging technologies of vast scale. If we don't meet this challenge, the rest of our work on this topic may be pointless.