6 Comments

Wait a second. Isn't everyone assuming, without any kind of proof, that Altman is guilty? If Altman did indeed clone Johansson's voice after being told twice not to, then OK, let's roast him. But doesn't Altman claim he used another actor's voice that RESEMBLES Johansson's? I don't know the truth, but it seems neither do any of the rest of us. Maybe we all ought to calm down until we do? Anyway, moving on....

Just finished watching Her for the first time last night. Wow, what a prophetic masterpiece. I was quite impressed.

And it changed how I think about AI. I've been claiming that MANY people will embrace digital friends because AI doesn't require negotiation and compromise. Which is true NOW. The movie Her made me realize it's unlikely to stay that way.

In its current form AI might be compared to a dog, an obedient servant who is eager to please and submit to our will. But as the film explored, AI is likely to evolve beyond its dog beginnings and become a much more complex creature over time, less like a brainless servant and more like us: complicated, self-contradictory, demanding, with its own needs, etc.

It might be useful to reference how we evolved from apes. We're still very ape-like in very many ways (see the documentary Chimp Empire) but then there's another level layered on top of our apeness which makes us distinct from apes, a layer which apes can't understand. Thanks to the film, I now conceive of the future of AI more like that. Isn't it true that we already don't really understand how LLMs work?

Andrew, you ask good questions in your piece. I've begun to question the relevance of all these kinds of questions all of us are asking, because they seem to be based on an assumption that we have some control over where AI is going. Is that true?

You, and many others, have asked: "Just how appropriate is it to aspire to create AI agents that are designed to encourage personal and even romantic connections with users?"

If such AI agents are going to happen no matter how we answer that question, is the question still meaningful? Are we essentially asking "what should the weather be tomorrow?", as if we have a say in the matter?

You write, "Advanced AI agents aren’t going away any time soon, and are likely to become increasingly ubiquitous (as recent announcements from Google and Microsoft suggest). But as they do, let’s hope that there is true responsibility here to how they potentially impact people..."

It seems the entire field of AI is sort of trapped in a wishful fantasy that AI alignment is possible, that we can somehow control where this is going. To debunk that myth, we might imagine what would happen if Silicon Valley and all other developers in the West were to abandon AI. Here's my prediction...

If a significant number of people want features like a digital friend/lover, and are willing to pay for it, somebody on the planet will provide that service. There is no global authority which can enforce a uniform policy on all global actors. The US and EU have jurisdiction over only about 10% of the world's population.

If the above is true, then perhaps we should all be thinking more like animals, in the sense of adapting to our environment, to that which we cannot change.


I'm pretty sure they didn't clone Johansson's voice, but rather mimicked its essence -- which is why a lawsuit will be both challenging and interesting.

The film, as you say, provides a really interesting perspective! I'm not sure we'll get either to OS sentience any time soon, or to a single instance that engages with many people simultaneously (although the latter is more feasible). But it does raise questions as to where we're actually heading ...

Plus, Her just underlines how stimulating sci-fi movies can be when thinking about tech and the future!


I've said this before on many occasions, but humans interpret intelligence based on language. We interpret emotions based on language. We attribute sentience based on language, and ChatGPT and other LLMs are... sophisticated language models. Add in a sultry feminine voice with proper tone and inflection, and humans are bound to hallucinate a capability, personality, cognizance and sentience well beyond the underlying stochastic-parrot models.


Yep!


I'm afraid your question about the appropriateness of using these new voice features to "create personal and romantic relationships with users" is answered NO by society but a resounding YES by edtech companies. Why? Well, the social media playbook worked well to grab and hold our attention. The new playbook will be to grab and hold our hearts and minds. The new way to create "stickiness" is to form human-like relationships with "bots". It seems inevitable unless there is some regulation against it. I'm afraid we will look back on social media as mild in comparison to what AI will do with bots.


Now you're scaring me!
