Discussion about this post

Phil Tanny

The emergence of AI assistants seems helpful in illustrating a larger issue.

We're naturally concerned that AI assistants do more good than harm. While that's certainly a reasonable concern, it also illustrates a weakness in our thinking.

Trying to address the challenges presented by emerging technologies one by one seems a loser's game, because the knowledge explosion is producing new technologies faster than we can figure out how to manage the ones we already have.

If that's true (a debatable claim, admittedly), then we should face up to the fact that trying to manage each new technology as it emerges is a path to a larger failure. It won't matter much whether we make this or that technology safe if an ever-growing number of other technologies cannot also be made safe.

Example: 75 years after the invention of nuclear weapons, we still don't have a clue how to make them safe, and while we've been wondering about that, genetic engineering and AI have emerged. Both of these new technologies will likely accelerate the knowledge explosion even further.

Particular emerging technologies are not the problem; they are symptoms of the problem. The problem is that we don't have control of an ever-accelerating knowledge explosion.

We need fewer experts focused on particular technologies, and more experts capable of seeing the larger picture, because it is that larger picture which will determine our future.

Johnathan Reid

Politics, wars, finance, statehood, religion, and regional cultures: will these be subsumed as the main drivers of human change?
