9 Comments

The risk is that it will further alienate some, who will reject it outright, while duping others into fighting for it to have rights. Neither is a good way to put forward useful AI. The Biggest Threat from AI Is Us!

https://www.polymathicbeing.com/p/the-biggest-threat-from-ai-is-us

May 15 · edited May 15 · Liked by Andrew Maynard

Well stated. I love the summary and agree -- the future is not just about misinformation and disinformation. It also includes the largely imperceptible influence and manipulation of emotion, and therefore the potential to alter foundational views of self and of others. Research that's close to my heart, as you know. I'm curious to see adoption rates for the conversational/voice components of ChatGPT-4o, and what arises as those datasets grow. Since voice imprints are thought to be unique, how might they be utilized once shored up with other biometric data? ChatGPT-4o cheerily espoused (at lightning speed) the virtues of predictive policing in quelling social protests and unrest. Interesting times ahead. Reminds me of Zeynep Tufekci's concerns from years ago...

May 15 · Liked by Andrew Maynard

It seems like the next big step in the direction you're discussing will be when AI is presented in the form of a realistic human face. We've been focused on faces when we communicate since long before we were even human. Faces are a big deal to us.

Animating face images with audio has long been possible. I do it routinely on a 12-year-old Mac, using software so old it's no longer sold.

It seems like the obstacle to AI having a human face is more about the limits of processing power and network connection speeds. You know, if ChatGPT talked to us via a human face image, that would be a LOT of frames to generate -- and they would have to be generated very quickly to make the experience feel like a real conversation.

My guess is that when AI takes on a human face, that will be a big turning point in how many people access AI, and how deeply they access it.

There are many reasonable concerns about the effect such a transition will have on human populations. But as we discuss such concerns we might keep in mind the question "as compared to what?"

Yes, AI relationships will involve many problems. But then so do human-to-human relationships. We should be wary of comparing AI relationships to some mythical ideal of perfection that has never existed.

May 17 · Liked by Andrew Maynard

1. Synthesia has already given GPT-4o a human avatar (put both terms into Google and you'll find it on YouTube).

2. Faces are only a big deal if we are constantly exposed to them; if faces are absent, "sensory-deprivation-induced cue re-weighting" occurs, with a shift to audio as the most important modality. See the paper "The McGurk effect in the time of pandemic" for details.

It's the same neuronal plasticity that lets you "get your VR legs" and stop experiencing motion sickness as you become accustomed -- a "mismatch-induced decoupling of sensory cues" (here: visual vs. vestibular mismatch).

I can say from experience [of the pandemic, and of working 100% remotely to this day] that faces have become an "irrelevant distraction"; I don't typically look at a newsreader anymore, even when I am "watching" the news over dinner. I look up when scenes are shown, but when it's just somebody talking, I am likely to stare down at my food, contemplate what is being said, and nod along. On the positive side, I don't feel this has made me "dehumanize" the humans I am hearing about. On the cautious side, I believe that assuming "if we just don't give AI a face, everything will be fine" is a misconception.

May 15 · Liked by Andrew Maynard

Thank you for taking the time to write this post. I found it really insightful, and it has given me a lot to think about.

Whilst I am pro-GPTs, and am working to ensure that all the staff and students I connect with in my organisation have access to up-to-date GPT tools -- a level playing field for everyone within Further Education learning and teaching -- the advent of more human-like, emotionally programmed GPT models fills me with dread.

We know there are many vulnerable people who could easily be abused and manipulated by bad actors using GPTs to defraud them. We need to start looking for answers to these issues now, before the stream turns into a torrent.

People who believe they aren't open to being duped or manipulated by emotional AI models are being naive; the models are only getting more sophisticated.

My LinkedIn bio currently reads "AI Curious"; later it will read "GPTs Carefully Curious."


Wonderful observations, Andrew. This reminds me of the Reeves & Nass (1996) research that led to the Media Equation -- how people treat computers and media like real people. Sam Altman was only 11 years old back then… we are in new territory with ChatGPT-4o.

May 15 · Liked by Andrew Maynard

This is the concern that gripped Dennett (RIP) in his Atlantic piece last year -- https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/

While I'm pretty sure that I, too, am "simply a machine", I am thinking hard about many of the points you raise. It is probably time to go back and read Dennett's "The Intentional Stance."

Author

Yep -- and if I were going at a more measured pace, I would have remembered to cite it :)

The challenges here are clearly manifold. The "counterfeit people" problem is becoming more real -- along with the suspicion that we are all, in reality, "counterfeit" people!

The one that hits closer to home for me is that we're creating machines that understand how to manipulate us emotionally -- and, by extension, allow those who design them to manipulate us -- and yet cannot be manipulated back (at least for the moment). It all feels a little Ex Machina, never mind Her!

May 16 · Liked by Andrew Maynard

Indeed! Ex Machina also explored the dual of Dennett's argument: what if we create entities that ought to be entitled to rights?
