Discussion about this post

Mark Daley

Well said! As a computer scientist, Ilya knows full well that absolute safety is *mathematically impossible* for any nontrivial definition of safety (https://noeticengines.substack.com/p/the-hard-problem-of-hard-alignment). I respect him enormously, but it leaves a strange taste in my mouth to read a proclamation that, on the face of it, rejects the Silicon Valley "productize and ship everything" mentality in favour of a pure research mentality, yet cannot be read by anyone with a background in theoretical computer science as anything other than marketing copy.

Your position that this very important, but complex and nuanced, matter should be approached with humility, and in the context of the full breadth of existing intellectual frameworks on safety, is one with which I wholly agree.

Phil Tanny

Whoa, great article, thanks Andrew.

As a thought experiment, we might imagine for a moment that Sutskever were to succeed in creating "safe superintelligence", however one might define that.

What happens next is that this new safe super AI acts as an accelerant to an already overheated knowledge explosion. By "overheated" I mean a process that produces new powers faster than we can figure out how to manage them.

For example, nuclear weapons were developed before most of us were born, and we still don't have a clue what to do about them. And while that puzzle eludes us, we now have genetic engineering and AI to worry about too, and we have no idea what to do about them either. And the knowledge explosion machinery is still running, developing new powers at an arguably ever accelerating rate.

What seems lacking from the safety equation is holistic thinking. The experts we look up to all want to focus on their particular area of specialization. Their primary interest is their career as experts. And so the focus of public discussion is almost always on this or that particular technology.

But does it really matter if AI is safe if the knowledge explosion AI will enhance produces other powers which aren't safe? How are we to be safe if the knowledge explosion continues to produce new powers faster than we can figure out how to manage them safely? Isn't a focus on particular technologies ultimately a loser's game?

What's happening is that humanity is standing at the end of the knowledge explosion assembly line trying to deal with ever more, ever larger powers, as they roll off the end of the assembly line faster and faster. This method of achieving safety is doomed to inevitable failure. If there is a solution, it's to get to the other end of the assembly line where the controls are, and slow the assembly line down so that we can keep up.

Instead, very bright people like Sutskever are using the controls to make the knowledge explosion assembly line go even faster.

