Let's imagine for a moment that all the risks presented by AI were somehow successfully met, and AI could then confidently be declared to no longer be a threat. This may not matter, because the knowledge explosion machinery that created AI will continue to generate ever more, ever larger powers, at what seems an ever accelerating pace.

Instead of focusing on particular emerging technologies one by one by one, we should be focused on the knowledge explosion assembly line which is producing all these emerging powers. If we fail to do that, the knowledge explosion will continue to generate new threats faster than we can figure out how to meet them.

For example, nuclear weapons are the biggest threat we currently face, and after 75 years we still don't have the slightest clue how to get rid of them. And while we've been puzzling over that, the knowledge explosion has produced AI and genetic engineering, which we also don't know how to make safe.

And the 21st century is still young. More and more and more is coming. AI is not the end of what's coming, but only the beginning. Think back a century to 1923, when people couldn't even imagine many of the technologies that would arrive over the rest of the 20th century. That's where we are today too.

If we don't learn how to take control of the knowledge explosion, all the hand-wringing about threats presented by particular technologies like AI may be pointless, because it won't matter if we solve AI if some other technology crashes the system.

Focusing on particular technologies one by one by one is a loser's game. Until we understand this, we are Thelma and Louise racing towards the cliff.

Great perspective, and I am one of the signatories who shares your view of the key threats being well down the chain from total annihilation. I particularly appreciate this point of yours -- that there are "potential catastrophic risks of not developing new AI-based technologies." Would love to know what you think of my call for an IPAI -- an Intergovernmental Panel on AI akin to the IPCC for climate change: https://revkin.substack.com/p/im-with-the-experts-warning-that

Thanks Andy -- I'll leave a comment over on your post as well, but my sense at this point is that a) we need international coordination and collaboration; b) this needs to have some form/degree of authority; c) I have no idea what this will look like, and am not yet convinced that existing models will work; and d) this means that we need a rich pool of ideas and dialogue (and fast) -- the IPAI concept may well be one that works, but I also worry that it will be too slow, not sufficiently grounded in multi-stakeholder dialogue and decision-making, and too prone to pushback from various groups. But the bottom line is that we need something!
