3 Comments

You write, "What worries me just as much though is that nothing about how we think, how we plan for the future, or how we develop approaches to ensuring better futures, is geared toward exponential advances that happen over months rather than years."

Good article, thanks! Let's use it as a launch pad for a wider, more holistic discussion.

For the purposes of this comment, let's do a thought experiment and imagine that AI either goes away completely or turns out not to be any kind of threat.

That happy AI outcome wouldn't really solve anything. The knowledge explosion would keep right on roaring ahead, arguably at an ever-accelerating pace. To illustrate: the biggest, most immediate threat to the modern world today is nuclear weapons, which were developed and mass-produced decades before the AI era.

Everybody wants to be an expert and focus their attention on some particular technology. I get the point of this for business reasons, but the result is that everybody winds up applying their intelligence to details instead of to the big picture: the knowledge explosion as a whole.

Avoiding an existential crisis in the modern world doesn't depend on successfully managing any one particular technology; it depends on successfully managing ALL technologies of that scale. That's not going to be possible so long as we ignore the underlying process generating all the new technologies.

As an example, nuclear weapons were first developed in 1945, and we still have no real idea how to get rid of them. And while we're scratching our heads about that, other new advanced existential-scale technologies like AI and genetic engineering have emerged, which we also have no idea how to make safe. That is, the knowledge explosion is producing new threats faster than we can figure out how to manage them.

So long as that equation remains in place, it doesn't really matter what happens with any one particular existential-scale technology. As the number of these threatening technologies continues to grow, if one of them doesn't bring down the modern world, another one sooner or later will.

Think of the knowledge explosion as an assembly line that is producing new large powers at a faster and faster pace. Standing at the end of that line, trying to deal with the new powers one by one as they roll off, is a backwards way to think about this threat. Sooner or later the accelerating process will overwhelm us.

What's needed instead is to get to the head of the assembly line and take control of the process. What's needed is to shift our focus from this or that technology to the knowledge explosion process that is generating all the technologies.

Fundamentally, this is not really a technical problem but a philosophical one. Our culture as a whole, led by the "science clergy", is conceptually trapped in a "more is better" relationship with knowledge, a 19th-century knowledge philosophy that used to make perfect sense but is now dangerously outdated. The technology races ahead, while our thinking about technology remains trapped in the past.

If we're not willing to examine that outdated knowledge philosophy, there's not much point in writing about AI or any other existential-scale technology, because sooner or later one of them is going to bring the house down, and the problem will be solved that way.


This is a long post! I also felt the impulse to write about AI 2027, but as a layperson it's hard to parse out the details given some of the tech jargon. It's becoming clearer and clearer to me that there is a major divide between the subset of people who can talk fluently about the technical aspects and arguments surrounding the AGI debate and the rest of us, especially teachers in the humanities, who have to rely on banal, imperfect analogies and metaphors to describe a technology they fundamentally do not understand. Another added layer to the difficulty of discussing the impact of AI in schools ...


Yep, I almost didn't include the report in the article, but then it would have been even less accessible to many people (simply because most readers will not click a link or download a document).

And yes, there needs to be better translation ... urgently! I would say this is one of the ways that working with an AI you're comfortable with can be very helpful.
