4 Comments

You write, "Over the past 20 years or so I’ve been working at the cutting edge of transformative technologies, and exploring ways of ensuring that the benefits substantially outweigh the risks as they are incorporated into society."

Don't forget that a single sufficiently large risk can erase all the benefits. It's not just a matter of benefits outweighing risks.

You write, "In each case, the communities of experts and practitioners I’ve worked with have learned that successful technology transitions take an incredible diversity of expertise, understanding, perspectives, and partnerships".

Or it could take something far simpler, like slowing down, or declining to develop some technologies at all.

You write, "...and ensuring that the societal and economic benefits far out weigh the risks."

Apologies, but this is 19th-century thinking, the notion that if the benefits far outweigh the risks, the problem is solved. That era of history ended in 1945.

You write, "If we are to successfully navigate technology transitions that include AI and AGI, as well as quantum technologies, advanced human augmentation, gene editing, the metaverse, and many more, we need greater coherence and relevance around how we combine existing understanding and generate new ideas."

And we might recognize this: ever more and ever larger powers, delivered at an ever faster pace, are a recipe for civilizational collapse. The word "acceleration" alone should be enough to illustrate why such a process will inevitably outrun our management abilities.

https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of

You write, "This wouldn’t matter so much if this was 30, 50, or even a hundred years ago, where the pace of innovation tended to allow enough time for things to even out in the end around risks and benefits (at least, to a certain extent). But this is 2023, and the pace of innovation is beginning to vastly outstrip our ability to find solutions to emerging problems using conventional thinking and established approaches."

Yes! Two examples of the needed unconventional thinking can be found here:

https://www.tannytalk.com/p/our-relationship-with-knowledge

and here:

https://www.tannytalk.com/p/world-peace-table-of-contents

You want to improve our management strategies, which is good. But you don't yet seem to grasp that there is a limit to how much we can improve them.

Knowledge development feeds back on itself, producing an ever-accelerating rate of further knowledge development. If we plotted that on a graph against the development of human maturity, we'd see the lines diverging over time. Sooner or later they will diverge too far.
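As a rough sketch of what I mean (the growth rates here are assumptions chosen purely for illustration, not measurements of anything):

```python
# Illustrative sketch only: the growth rates are assumptions, not data.
# Knowledge compounds on itself (exponential), while maturity improves at
# a steady, linear pace. The gap between the two widens without bound.

def knowledge(t, rate=0.05):
    """Self-reinforcing knowledge: each year's stock seeds the next."""
    return (1 + rate) ** t

def maturity(t, slope=0.05):
    """Human maturity improving at a constant, linear pace."""
    return 1 + slope * t

for year in (0, 25, 50, 100, 200):
    k, m = knowledge(year), maturity(year)
    print(f"year {year:3d}: knowledge={k:10.1f}  maturity={m:6.1f}  gap={k - m:10.1f}")
```

Whatever the actual rates turn out to be, an exponential curve eventually leaves a linear one behind.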

Two areas to look into:

1) Learn how to take control of the pace of the knowledge explosion to better match our ability to adapt to change.

2) Put some focus on violent men, the single most dangerous factor with regard to emerging technologies of vast scale. If we don't meet this challenge, the rest of our work on this topic may be pointless.


Thanks for the comments Phil.

I would argue that a risks-and-benefits framing still makes sense -- although there is an important distribution here, and at one end you have existential risks and black swan events that skew the framing. But the vast majority of practical decisions that need to be made come down to some form of balance -- and there are very few things where benefits of some form cannot be found.
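As a toy illustration of that skew (the probabilities and payoffs below are invented purely for illustration, not drawn from any real analysis), a single tail event can dominate the expected value even when nearly all of the probability mass sits on beneficial outcomes:

```python
# Toy numbers, invented for illustration: 90% of the probability mass is
# on beneficial outcomes, yet one low-probability catastrophic outcome
# dominates the expected value.

outcomes = [
    (0.600,        10.0),   # common case: modest benefit
    (0.300,        50.0),   # good case: large benefit
    (0.099,       -20.0),   # bad case: recoverable harm
    (0.001, -1_000_000.0),  # black swan: civilization-scale loss
]

expected_value = sum(p * v for p, v in outcomes)
print(f"expected value: {expected_value:.1f}")  # about -981, despite mostly beneficial outcomes
```

Which is why the tail of the distribution has to be handled separately from the everyday balancing act.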

And on the decision to develop or not develop -- one argument here is that in a world of 8 billion people there are dynamics at work that make it near-impossible to reach a state where everyone agrees on a single course of action, or otherwise supports it.


Hi Andrew,

Yes, pretty much every technology is developed because it can deliver benefits. Even nuclear weapons have the benefit of sobering the great powers.

I hear what you're saying about the vast majority of practical decisions. But all of those decisions exist within a framework ruled by the worst-case scenario. Get that wrong, and none of the other decisions really matter.

I hear you on the 8 billion people too. Learning how to control the pace of the knowledge explosion is a huge historic challenge, no doubt. My question is, is this civilization sustainable if we don't learn that? Is not learning it really an option?


That is the million-dollar question!
