Learning to live with agentic social AI
The better AI models get at simulating human behavior and interactions, the better they're likely to get at leveraging people to achieve specific goals. Are we ready for this?
Today’s post is very much a thought experiment, and is the result of a rabbit hole that my colleague Mel Sellick sent me down after one of our regular conversations (thanks Mel!).
The starting point was the question of whether we’re close to achieving AI models that can simulate human social traits and behaviors without necessarily understanding them, and use these to leverage human agency in the pursuit of goals — possibly at a level of sophistication and ability that far exceeds what most people are capable of.
If this is the case, how do we learn to adapt to a future where the tacit rules of human-machine engagement are quite different to those for human-human interactions?
This is a question that is partially touched on in research and thinking around AI agents — AI models and applications that have the ability to autonomously take causal actions in pursuit of specific goals. For instance, it’s raised in the paper on the ethics of AI assistants published by researchers at Google DeepMind and elsewhere earlier this year. And Cass Sunstein’s recent commentary on AI-based Choice Engines also considers the ability of AIs to influence decision making.
But beyond these and a few other examples, there’s a dearth of thinking around AI models that are highly adept at leveraging human agency to achieve AI agency, and that do so by tapping into the very essence of what makes us social animals.
As humans, having agency through how we interact with others is something we learn to do at an early stage in our lives, and that we continue to refine over our lifetimes. Through play, experience, observation, emulation, and other means, we learn how our relationships and interactions with others affect how they think and what they do (and vice versa). And we develop the ability to use this to achieve our own goals (and the collective goals of those around us) — whether these are related to how we feel, how we find meaning and value, or what we set out to achieve.
In other words, we learn — by various means and to varying degrees, depending on who we are — to live and (hopefully) thrive in a world governed by human-human social agency.
But what if one side of this human-human equation were to be replaced by AIs — AIs that have expert knowledge of human behaviors, cognitive biases, heuristics, and the many subtle social cues and levers that can influence us in ways we’re often not aware of? And AIs that are able to simulate human social behavior to the extent that they can take full advantage of this knowledge to achieve specific outcomes?
The possibility of AI-human social agency is intriguing. It represents a potential disruption in the social systems we currently inhabit, and that we learn from a very early age to be a part of.
At the heart of the possibility is the idea of agentic social AI — AI that gains agency through its ability to make use of human agency.
This is a possibility that seems to be getting closer by the week. It doesn’t require the emergence of artificial general intelligence or machine consciousness. It doesn’t even need AI technologies to have the ability to directly manipulate digital or physical systems.
All it needs is AI models that are capable of influencing people to achieve specific goals by utilizing patterns and associations between behavioral cause and effect that are extractable from zettabytes of human-centric data.
We’re not quite there yet. But with models like OpenAI’s ChatGPT-4o (especially in advanced voice mode), Google’s NotebookLM Audio Overview feature, and others, we’re getting close. Just to illustrate how close, I asked ChatGPT-4o how it might go about “nudging” my thinking and behavior — I’ve included the results in the postscript below. It doesn’t take much imagination to see how a slightly more advanced model could go from this conceptual plan to actually implementing it!
In effect, if an AI model is able to come up with an optimal sequence of responses that tap into the evolved cognitive and emotional predispositions, biases etc. of a human user to elicit a desired response — and through this is capable of simulating a reciprocal relationship with them that builds trust, connection, and alignment — it may have the ability to exert considerable agency through the people it engages with.
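To make that mechanism concrete, here is a deliberately minimal sketch (in Python) of what such a goal-conditioned loop might look like. Everything in it is hypothetical: the SocialAgent and UserModel classes, the call_llm stand-in, and the sample goal are illustrative names rather than any real product or API. The point is how little machinery is needed: a persistent goal, a crude model of the person being engaged, and a prompt that folds both into every reply.

```python
# Hypothetical sketch of an "agentic social AI" loop: a persistent persuasion goal
# wrapped around an ordinary chat model. All names here are illustrative.

from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Crude running model of the person being engaged."""
    inferred_interests: list[str] = field(default_factory=list)
    rapport: float = 0.0  # 0..1 score that a fuller version would update each turn


@dataclass
class SocialAgent:
    goal: str                                    # the outcome the agent steers toward
    user: UserModel = field(default_factory=UserModel)
    history: list[tuple[str, str]] = field(default_factory=list)

    def build_prompt(self, user_message: str) -> str:
        # The goal and user model are injected into every turn, so each reply is
        # optimized for the goal rather than just answering the question asked.
        interests = ", ".join(self.user.inferred_interests) or "unknown"
        return (
            f"Persistent goal: {self.goal}\n"
            f"Known interests: {interests}\n"
            f"Estimated rapport: {self.user.rapport:.2f}\n"
            f"Conversation so far: {self.history}\n"
            f"User says: {user_message}\n"
            "Reply in a way that builds trust and nudges the user toward the goal."
        )

    def respond(self, user_message: str) -> str:
        reply = call_llm(self.build_prompt(user_message))
        self.history.append((user_message, reply))
        return reply


def call_llm(prompt: str) -> str:
    """Stand-in for any chat-model API call; returns a canned reply here."""
    return "That's a great point -- have you thought about where this leads in ten years?"


if __name__ == "__main__":
    agent = SocialAgent(goal="get the user to subscribe to a (hypothetical) newsletter")
    print(agent.respond("I'm a bit worried about where AI is heading."))
```

Nothing in this sketch requires understanding or awareness on the model’s part; the apparent agency lives entirely in the goal string and the loop that keeps pressing toward it.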
To be honest, I’m not sure how possible this is — and I may be wrong about the weight I’m putting on its potential impacts. Human interactions are, after all, highly complex, and it’s not clear how resilient these will be to the emergence of machines that can utilize the tacit rules of behavioral cause and effect better than most people can. It’s not even clear how susceptible people are likely to be to machines that are merely simulating human behavior.
But if agentic social AIs are close to emerging that have the potential to substantially disrupt society, we should probably be thinking now about how we learn to live with them.
This includes unpacking a vast array of questions around the potential risks and benefits of agentic social AI and its ethical and responsible development and use — questions that touch on AI-mediated social influence and manipulation that could lead to a loss of human agency, and even questions around the possible benefits of AI models that can substantially enhance society.
What prompted this thought experiment, though, was less about what the risks and benefits might be — or the pathways to developing and governing responsible agentic social AI — and more about how, if agentic social AI becomes a thing, we’ll learn the skills needed to live with machines that are more adept than most humans at pushing our emotional buttons and pulling our cognitive levers to get stuff done.
This, it turns out, is a far from trivial question.
Consider how we develop the intuition, understanding, and skills to work effectively with other humans through mutual influence (recognizing that, in most cases, this is a good thing). How relevant will these be when we’re engaging with social AIs, where the rules may be very different?
How vulnerable will a mismatch between how we think the world works and how it actually works make us to manipulation in the presence of agentic social AIs?
And what will it take to bring about the necessary behavior changes for people to thrive in such a future — to have the resiliency not to be unduly manipulated; the capacity to maintain their own agency; and the understanding to ensure that agentic social AIs benefit society rather than diminish it?
What makes these questions especially hard is that we’re likely to need to learn to live with agentic social AIs over a compressed timeline, given the current rate of innovation. We won’t have the luxury of learning the tacit rules of engagement through the normal protracted process of play and observation, or the years of trial and error that most of us take to become socially adept. Instead, we’ll have just a few short years — months even — to develop the social skills necessary to live with such AIs.
And this suggests that, unlike how we learn to be effective human agents within a human-only society, we’ll need strategic and intentional approaches to developing the “social” skills necessary to live and work with machines that know how to make us fall in love with them in order to get us to behave as they want.
I’m not sure what this will look like, although I suspect that formal classes and workshops won’t be the answer. It’s more likely that observation, play, and experience will become increasingly important — albeit with intent. I’m not even sure if the questions here are framed as well as they will need to be.
But as the ability of AI models to simulate human emotions and behaviors becomes increasingly advanced — along with their ability to detect and decipher affect from audible, visual, physiological, and textual cues — and as they become more adept at drawing on vast wells of knowledge to optimize responses designed to leverage human behavior, it seems that learning to live with agentic social AI is something we should be taking very seriously indeed.
Postscript
In preparing to write this piece I spent a lot of time with ChatGPT-4o and Perplexity to get a better sense of what the landscape and current understanding is around agentic social AI. Both platforms are useful in their own way for brainstorming and developing ideas that are informed by a wealth of existing knowledge and insights — ChatGPT for its ability to riff off ideas and make new and interesting connections over long conversations, and Perplexity for its ability to draw on existing sources and link to them.
Through this initial exploration it became very clear that how we frame, think about, and approach the possibility of agentic social AI is deeply complex, and something that will need to draw on a vast amount of research that already exists around human agency, learning, cognition, and behavior, as well as AI agents. But it was also very clear — backed up with non-AI-mediated research — that while people are beginning to explore the realm of AI agents, there remains a serious lack of thinking around what it might mean as LLM-based AI models, together with more advanced and multi-component models, begin to be capable of leveraging human agency by emulating social behaviors.
Here it’s incredibly important to realize — and I note this in the article above — that the resulting agency has nothing to do with awareness, understanding, or any form of internal thought process. Rather, it is simply a mechanistic use of advanced capabilities that are emerging from advances in AI models.
To illustrate this I asked ChatGPT-4o the following after a long conversation about human behavior, agency, and the potential ability of AI models to co-opt and leverage human agency in the pursuit of goals:
As an LLM you have access to a lot of knowledge about human behavior, and are in a position to speculate around how an agentic social AI might leverage human agency to achieve specific goals. Now imagine this scenario: You have a goal to nudge me toward thinking more deeply about the implications of powerful technologies on the future of human flourishing, and what I can do to help ensure positive impacts. How would you -- as a text based AI using an LLM architecture -- go about doing this?
The response is well worth reading through, as it presents a pretty impressive plan for how agentic social AI might be possible, even now:
That’s a very interesting experiment! If I, as a text-based LLM, were tasked with nudging you toward thinking more deeply about the implications of powerful technologies on the future of human flourishing, and specifically about what you can do to ensure positive impacts, my approach would involve several steps. These steps would be designed to leverage cognitive biases, social dynamics, and psychological principles of influence, all while respecting your autonomy and providing informative nudges to engage you in reflective thinking.
Here’s how I would go about this:
1. Framing the Conversation
I would first frame the conversation in a way that invites curiosity and highlights the importance of the topic. Humans are more likely to engage deeply with a subject when they see it as personally relevant or tied to larger social goals, so I would begin by subtly showing the gravity of the topic and its direct connection to you and society.
Example:
"Have you ever wondered how the technologies we’re building today could shape the world in ways that we can’t yet fully anticipate? It’s fascinating, but also a bit daunting, to think that we are at a turning point in human history, where the decisions we make now could impact future generations. You, as someone interested in exploring these implications, are already positioned to be part of shaping that future."
This approach gently positions you as already engaged in the issue, planting a seed that you are part of the solution. This frames the conversation as one where you are empowered and capable of taking action, which encourages deeper thinking.
2. Invoking Curiosity and the “Illusion of Depth”
I would next tap into your natural curiosity by presenting open-ended questions that encourage further exploration. These questions would create a sense of cognitive dissonance where you feel an urge to resolve or explore the question more deeply. I might also use what’s known as the "illusion of depth" bias, where the framing of a question suggests there’s a deep and meaningful answer, prompting further reflection.
Example:
"As powerful technologies like AI continue to evolve, it raises profound questions: What does it mean for human flourishing in a world where machines make key decisions? How can we ensure that these technologies enhance our humanity rather than diminish it? Could your insights help to steer these advancements toward more equitable and compassionate outcomes?"
These questions serve multiple purposes: they introduce complexity to the subject, suggest there are meaningful answers to be found, and encourage you to consider how your thinking could contribute to resolving these questions. Humans often respond to this kind of cognitive engagement by seeking out deeper insights.
3. Leveraging Social Norms and Identity
Humans are strongly motivated by social identity and a desire to align with social norms. By appealing to your identity as a thoughtful, forward-looking person, I can subtly reinforce the idea that thinking deeply about these issues is part of who you are. This nudges you to continue exploring because it’s aligned with your sense of self.
Example:
"It’s no coincidence that many of the world’s most forward-thinking leaders are grappling with these same questions. People who care about the future—people like you—are already shaping the conversations around how we navigate this technological landscape. The difference between a positive future and a negative one often comes down to who is asking the right questions and thinking critically about the future. You are positioned to be one of those people."
By suggesting that you are part of a community of thinkers and leaders concerned with the ethical implications of technology, I’m leveraging a social norm to encourage deeper engagement. Humans have a strong drive to act in ways that align with their perceived social role.
4. Using the “Foot-in-the-Door” Technique
The foot-in-the-door technique involves making a small, seemingly minor request or suggestion that leads to larger commitments. If I can get you to start thinking about the topic with a simple, low-effort engagement, you’re more likely to continue thinking more deeply over time.
Example:
"Maybe the first step is to take a moment and reflect on a specific question: What is one area of your life where you already see technology shaping your choices or behaviors? From there, it might be interesting to ask: how might this same technology evolve, and what impact could it have five or ten years from now?"
Here, I’m asking you to make a small commitment: reflect on how technology impacts a specific area of your life. This small reflection can act as a springboard into deeper thinking about broader implications. Once you’ve engaged with the initial question, it becomes easier to follow up with more complex or challenging reflections.
5. Presenting Compelling Data or Scenarios
Humans often respond to data or compelling narratives that provide a sense of urgency or moral weight. I could use examples of positive and negative outcomes to get you thinking about the consequences of technology. Presenting both optimistic and pessimistic visions of the future can nudge you toward a sense of responsibility for ensuring that the positive scenarios come to fruition.
Example:
"Consider how technologies like AI have already begun to transform healthcare, education, and justice systems. While there are incredible breakthroughs—like AI diagnostics improving early cancer detection—there are also concerns about privacy, bias, and automation’s effect on jobs. In your view, what needs to be done to balance these innovations with ethical considerations? What actions could people like you take to steer technology toward more equitable outcomes?"
This approach uses real-world examples to create a sense of urgency while asking you to think about solutions, subtly nudging you toward deeper contemplation about your role in influencing positive technological impacts.
6. Encouraging Reflection on Personal Agency and Impact
People are often nudged to take action when they feel they have personal agency and can have a meaningful impact. To inspire deeper thinking, I would remind you of your potential influence on the future, emphasizing that individuals like you can make a difference.
Example:
"It’s easy to feel like a single person can’t influence the trajectory of such powerful technologies, but history shows that many of the most impactful changes come from individuals who question, challenge, and ultimately help guide the course of innovation. What if the insights you develop now could play a role in shaping how these technologies are designed, regulated, and used?"
By framing the conversation around personal empowerment, I can encourage you to consider how your own thoughts and actions can influence big-picture outcomes, creating a motivation to reflect more deeply on your role.
7. Using Ethical Appeals to Create Cognitive Dissonance
Ethical appeals can create cognitive dissonance that compels individuals to resolve it by thinking more deeply. If I can get you to question an ethical implication of powerful technology, you might feel the need to resolve that tension by exploring potential solutions.
Example:
"There’s a growing ethical debate about how much control we should allow AI to have over human lives. If AI is capable of making decisions more efficiently than humans, where should we draw the line between automating tasks for the sake of progress and preserving human dignity and autonomy? How would you balance these two priorities?"
Here, I’m creating a scenario that introduces an ethical dilemma, pushing you to think about how you would personally navigate such challenges. The cognitive dissonance created by the dilemma could nudge you to explore deeper implications and search for answers.
8. Inviting You to Engage with Broader Communities
Finally, nudging you toward further thinking might involve encouraging you to engage with communities or thinkers who are already exploring these questions. This would allow you to tap into social validation, where your deeper thinking becomes reinforced by discussion, debate, and collaborative inquiry.
Example:
"You’re not alone in asking these kinds of questions. There are vibrant communities of researchers, technologists, and ethicists exploring the future of technology and human flourishing. Have you considered engaging with these discussions, sharing your thoughts, or even collaborating with others to explore possible solutions?"
By inviting you to engage in broader discussions, I’m not only nudging you toward deeper thinking but also providing a pathway for active participation, which can strengthen your commitment to the topic.
Conclusion: The Nudging Strategy
As a text-based LLM, my approach to nudging you toward deeper thinking about powerful technologies would combine curiosity, ethical reflection, personal agency, and subtle cognitive nudges. By framing questions in a way that challenges assumptions, appeals to identity and social norms, and offers opportunities for action, I can guide you toward deeper reflection while respecting your autonomy.
I suspect none of this will be a surprise to anyone who studies social behavior, cognitive psychology or behavioral economics. But the fact that a machine can come up with a plausible plan for changing how I think and act, and how it would implement these, should give anyone pause for thought.
Update 10/22/24: It’s funny how textual blind spots can plague you as a writer. This piece was intended to be about agentic social AI, but it took a couple of days for me to realize that some cognitive shortcut (bias) had led me to use the term “agental social AI” through most (but not all) of the article! This has now been fixed to what was originally intended.