2 Comments

As ever, we will end up writing policy for the (fictional) world model we've constructed in our minds. For slow-moving or static endeavours, our mental models are generally pretty good reflections of reality. For fast-moving technology, though, as you point out, this is much less true.

I feel an even greater sense of concern, as I reject the idea that progress will suddenly plateau (as fashionable, and palatable, as that idea is); AlphaProof/AlphaGeometry, o1, QwQ-32B, etc. are early exemplars of the power of flexible inference-time compute. This, alone, is a paradigm that still has room to scale. (That said, whether the average human could even tell the difference between a machine with a human-equivalent IQ of 250 and one of 350 is less obvious!)


Yeah, I was finding it hard not to go off on a tangent around just how complex and entangled tech s-curves are: a single curve is just an out-of-context snapshot from a whole ecosystem of entwining curves that, together, show a different type of behavior. And, of course, as you point out, there is strong disagreement around whether, and if so when, the current developments in AI will plateau.

I decided to avoid the tangent, as it would have made the piece too convoluted and detracted from the point -- which, of course, becomes even more pointed if those capability and utility curves are still heading upward at an increasing rate while perception remains in the equivalent of the dark ages 🙂
