
Maynard writes, "As AI development continues to accelerate, we cannot afford to get things wrong — if anything, the stakes here are far higher than they were with either nanotechnology or GMOs. But to do this, AI needs to learn from the lessons of the past if it's to lead to a better future — and fast."

https://andrewmaynard.substack.com/p/responsible-ai-lessons-from-nanotechnology

The lesson we need to learn is that we're riding a knowledge explosion which generates powers of vast scale faster than we can learn how to manage them safely. It's this pattern which is the problem. Nuclear weapons, genetic engineering, and artificial intelligence are here today, and more such vast powers we can't yet imagine are coming as the 21st century continues to unfold. As the number and scale of these powers increases, the room for error shrinks, and the odds turn increasingly against us.

The most constructive thing we could do is to stop talking about particular threatening technologies one by one, and shift our attention to the knowledge explosion itself, the process generating all these powers of vast scale. Focusing on particular threatening technologies is a loser's game, for the simple reason that they are being developed faster than we can learn how to manage them. The only hope lies in learning how to take control of the process that generates such powers.

A primary obstacle to achieving such a shift of focus to the source of the problem seems to be that a "more is better" relationship with knowledge is a holy dogma of the science community, and it is the science community that holds most of the cultural authority in the modern world.

The science community sincerely wishes to make these emerging powers safe, but they don't seem able or willing to conceive of any approach that challenges the "more is better" philosophy at the heart of their enterprise. And so they keep creating powers of vast scale as fast as they can, and when things go wrong they blame the public, or politicians, or anyone except their own stubborn addiction to an outdated 19th-century knowledge philosophy.

Are human beings gods, creatures of unlimited ability? If we answer no, then it should become obvious that any process which generates new powers in an unlimited "more is better" and "faster is better" manner is on a collision course with the reality of the human condition. Whatever the limits of human ability might be, we are racing towards those limits as fast as we can. And because of the vast scale of powers being developed, when we do reach the limits of our ability, catastrophe may explode upon us much faster than we can effectively respond.

I'll begin to take the science community seriously when I hear them talking about the possibility of pausing the development of all powers of vast scale. Yes, actually doing this would be very challenging. But talking about doing it is not at all difficult, and that's the place to start.

Science is supposed to be built upon respect for evidence. And here's the evidence we should be listening to today.

Nuclear weapons - we have no idea how to make them safe

Genetic engineering - we have no idea how to make this safe

Artificial intelligence - we have no idea how to make this safe

See the pattern?

If the science community is truly loyal to reason, they will acknowledge this pattern and respond to it. And that will require doing less science while we learn how to bring this dangerous pattern to an end. Doing less science for a time doesn't mean we stop learning. On the contrary, it means learning something new, necessary, and huge: how to take control of the knowledge explosion.

If you're hearing frustration in my words, that's because I've lost faith in the science community's ability to even consider what's been described above. That virus of distrust will continue to spread throughout our society until the science community comes to terms with the new world its success has created.
