11 Comments
2 hrs ago · Liked by Andrew Maynard

Out of curiosity I asked ChatGPT, Copilot, and Gemini the exact same question you posed. Gemini's response was too brief to be worth mentioning. Copilot's was similar to ChatGPT's, and both differed from the response you received. The distinction, I reckon, is due to differences in domain expertise, style, and contextual knowledge.

I went on to upload the response you received and asked for the reasons behind the differences. ChatGPT reasoned that yours was framed for academic research, so it shifted toward a formal, research-based tone, acknowledging your domain knowledge and potential academic curiosity. The key difference lies in audience adaptation: it shapes the content, depth, and tone to match the person asking the question and the context in which they ask.

The other aspect to consider in understanding AI's ability to influence decision making: is this as good as it gets? If responses are confined to the cognitive and knowledge level of the user, does that limit the user's ability to advance and make better decisions?

author

Interesting -- yes, context is really important I think, as is the prior conversation that precedes the question (this is also an indicator of potential social agency as LLMs adapt to who they are engaging with).

And interesting question on whether this is as good as it gets -- I suspect not, as I think we've barely scratched the surface of what we might call AI nudging, persuasion, or even social engineering!

4 hrs ago · Liked by Andrew Maynard

Why is the onus on us as humans to develop the skills to work with AI agents? The AI agents need to be properly socialized the way we humans are taught to respect other humans and not try to trick them or lie to them. If AI agents do not develop sufficient social and ethical intelligence, there will be a rapid and strong rejection of the technology. Industry should think very carefully about this, because my guess is that we have only one opportunity per generation to get this right.

author

I suspect this is a "yes and" scenario -- yes, the ideal is that we develop human-centered technologies that support rather than challenge human values etc. At the same time I'm struggling to think of any scenarios where we've been able to achieve this as a society, simply because the dynamics of innovation -- the economics of innovation even -- tend to lead to technologies that we end up learning to assimilate and live with.

Although that doesn't mean we shouldn't try to do things differently.

4 hrs ago · Liked by Andrew Maynard

Yes, and there is always the adversarial/criminal application of technology. I enjoyed the essay; thanks!


Andrew: TY for the good article. My main complaint is that you offered no defense against AI!

IMO the most powerful defense we have is the ability to do critical thinking. However, critical thinking is not only NOT being taught where it should be (K-12, especially in science); the opposite is actually being heavily promoted. What is the opposite? Conformity...

For those interested in more, please subscribe to my free Substack on Critical Thinking <https://criticallythinking.substack.com>.

author

Thanks John. You're right, I didn't :) Mainly because I wanted to open up a conversation rather than offer solutions where I don't even think we understand how to formulate the problem yet -- but this is exactly where I hoped that people would jump into the conversation!

Thanks for the link to the Substack. Interestingly, critical thinking is something that most educators -- certainly in higher education, but also I suspect in K-12 -- would say is critically important. And I know it's heavily emphasized in many of the programs I'm involved with. But I think we also have challenges around what we mean by critical thinking, how it is used, and how it leads to positive outcomes.

This becomes really important in the context of System 1 and System 2 thinking -- and the realization that the vast majority of the actions we take and decisions we make do not -- by biological design -- employ critical thinking. Case in point: when engaging with an administrator over getting routine stuff done, most people will not, and cannot afford to, think consciously and step by step through every aspect of what they are doing and why.

I would still argue that critical thinking is important, but I also think we need to think more deeply about what we mean by it.


I loved reading this -- I must admit I had to drop watching anime to read and reply. But green tea helped.

OK, so this made me think of how "quickly" we are anthropomorphizing technology -- or maybe we're "sleepwalking into an abyss" (depending on the relative context). Where do we draw the ethical line for the Trojan horses in our thinking? What kind of resilience will we develop? Would this potentially push us into developing some sort of "meta agency" for awareness? This also raises the question of capacity needed vs. capacity built, with a bias to action and all the sub-constituent elements in the process.

Too much to think about, so I guess I'll revert to anime. (Hey, is it "denial"/"sleepwalking" on my part?) 😅

author

Thanks Faizan! This is, of course, a massive question/concern at the moment. But I also wonder if there's something more complex here than anthropomorphism -- something more along the lines of how actors of various sorts engage with and mutually influence each other in complex networks. I almost touched on Actor Network Theory in the piece, but didn't, as this would have begun turning it into a thesis!

12 hrs ago · edited 4 hrs ago · Liked by Andrew Maynard

The possibilities you outline are an obvious step. It's a pity you didn't compare and contrast potential AI agent capabilities and requirements with existing persuasion methods (for example: Facebook, political parties, adverts, or even just books). There is also a key distinction between people being pulled into a persuasive message or argument and having one pushed at them. A necessary requirement for the former will be the storage of tokenised prior interactions (both human-human and human-AI) and precise segmentation of humans by psychological/psychiatric profile.

It would also be useful to extend the discussion to the AI-to-AI communications implicit in any personal agent commissioning. What language will be used? Will it be understood by, or amenable to, human analysis? Will the further devolvement of human social interactions to devices have positive or negative consequences?

A further, final, notable consequence of sophisticated AIs built on human persuasion methods will be the rise of artificial gurus, shamans, and even lauded deities. The latter don't require a physical presence or even a visual representation to be highly influential and to have a substantial, long-term cultural impact. There are also obvious monetisation advantages: some of the biggest and most lavish buildings in a community are devoted to such entities. It would be naive to think there wouldn't be a willing, devoted audience. Just ask Max Headroom.

author

Really appreciate you pushing the thinking here, Jonathan -- which is exactly what I hoped to spark with the piece :)

Obviously persuasion is a critical part of the puzzle here, but I think it's also just one of many dimensions associated with the concept of agental social AI (and one reason why I don't explicitly call it out). Part of the reasoning here is that in complex social networks, influence is usually more than just persuasion -- you have alignments, cooperation, agreements, common goals, shared beliefs, and a whole host of other things that shape the dynamic, although these are also all potential tools that can be used to influence people.

And great reference to Max Headroom :)
