
You write, "there are growing concerns that these are just the beginnings of a deeply complex AI technology transition that will demand radically new thinking around governance and regulation."

Yes, and we are nowhere near willing and able to do radically new thinking.

You write, "Both papers reflect a growing sophistication around how leading experts are beginning to think about AI governance..."

"Growing sophistication" is no where near adequate, but it sounds good on one's resume.

One of the angles that requires examination is the question of whether experts are in a position to do the kind of "radically new thinking" that is needed. I'm referring here to their position in the expert business. How far beyond the group consensus can they afford to travel? How much can one afford to rock the boat and explore beyond the "realistic" and "reasonable", if one's income and family depend on one's reputation?

And why should we consider them experts if on one day they are warning of existential risk, and the next day they go back to their offices to push the field forward as fast as possible?

You write, "the paper’s authors propose a governance approach that relies heavily on self-regulation and consensus-based regulation which includes engaging with key stakeholders — including civil society — and that is overseen by government regulation."

This is the kind of vague word salad that they want us to pay them for.

But, to simplify the topic, let's assume the experts succeed at making AI entirely safe. That's not possible, but let's assume it anyway.

The real threat, which I rarely see mentioned unless I write it, is that even safe AI will be more fuel poured on an already overheated knowledge explosion. That process will produce ever more, ever larger powers, at what is likely to be an ever-accelerating rate. All those emerging powers will have to be made safe too. And we have no credible record of making powers of vast scale safe.

Here's what I tried to share with Gary Marcus, a smart expert with the best of intentions. I post versions of this same message routinely on his blog. I can detect no interest.

https://garymarcus.substack.com/p/jumpstarting-ai-governance/comment/18269305


Bio-logicals being Eco-logical has always been the 'future'. An apple (Newton) is a useful bio-logical model, but an orchard is a poor model of "diversity=stability". Jung's rhizome (ginger?) might be a more useful latent space model for geospatial prompted terminators (GPT8u)?
