Is Conscious AI Possible?
A new paper by Anil Seth argues that consciousness may not be achievable through current approaches to AI, but that the possibility of real or apparently conscious AI still raises important challenges
It seems that barely a week goes by these days without some degree of speculation that the current crop of large language models, novel AI systems, or even the internet itself, is showing signs of consciousness.
These are usually met with a good dose of skepticism. But under the surface there’s often a surprisingly strong conflation between advanced artificial intelligence and conscious machines — to the extent that much of the current wave of AI acceleration being pushed by companies like OpenAI and Anthropic is underpinned by a belief that future human flourishing depends on superintelligent machines that are likely to exhibit some form of conscious behavior.
Yet in a new paper, leading neuroscientist Anil Seth argues that consciousness may be a unique product of biology, and thus unreachable through current approaches to AI.
The arguments Seth makes are critically important to how advances in AI are framed and pursued, and to the responsible and ethical development of AI more broadly. They also call into question whether current efforts to achieve artificial general intelligence (AGI) and superintelligence are, in fact, feasible — or even advisable.
Seth’s paper is as refreshing as it is necessary. In stark contrast to some of the hyperbolic and hubristic proclamations coming from proponents of advanced AI, Seth writes with clarity, authority, and — importantly — humility. He readily admits that his ideas may be wrong, and invites people to test and build on his thinking.
Yet his reasoning is compelling, and is grounded in a deep understanding of research and thinking on consciousness.
At the core of the paper is the possibility that consciousness is not simply a function of computational power, but that it is an emergent or embedded property of systems that are biological, or essentially biological in nature.
If this is right — and Seth notes that there’s a high probability that it is, but not absolute certainty — it means that consciousness cannot arise simply from the software or algorithms that run within banks of GPUs or highly networked computers, or even on personal computers or mobile devices.
If true, this would throw a rather large wrench into the ambitions of developers striving to create advanced AI simply by scaling compute capabilities using digital hardware.
Of course, many would probably claim that their aspirations to achieve superintelligent AIs do not depend on achieving consciousness in machines. But there comes a point where the types of agency, reasoning, and problem-solving capabilities inherent in aspirational AI are hard to imagine without consciousness occurring in some form.
Seth’s paper starts with the concept of “computational functionalism” — the idea that consciousness can arise from computational power alone, and not the platform (or substrate) on which computations are carried out. It’s a concept that leads to the assumption that “consciousness is only a matter of what a system does, not what it is made out of,” or what Seth refers to as “substrate neutrality.”
Computational functionalism is a widely held assumption amongst proponents of exponential progress toward AGI and superintelligence, and of a technological “singularity” where machines reach a point where they rapidly surpass human intelligence of their own volition.
Yet as Seth points out, everything we know about consciousness from studying biological systems strongly suggests that there is a deep dependency between what might crudely be considered the hardware (or “wetware” in the case of biology) of a system, and the software that we might think of as running on it.
In fact, even this analogy is flawed: there are indications that biological phenomena like consciousness do not align with a hardware/software model rooted in modern computer science, and that this model is merely an anthropocentric projection of what we have created onto the world we inhabit as we seek to understand and emulate it.
This doesn’t necessarily mean (at least in my interpretation) that consciousness is not a function of computation, but that the very definition of computation needs to incorporate the whole system, rather than distinguishing between the substrate and the algorithms that run on it.
Seth goes on to explore in some depth how the current state of understanding suggests that computational functionalism does not lead to consciousness, and instead points toward consciousness as being associated with biological mechanisms — or “biological naturalism.”
Here, Seth develops compelling arguments for the nature and even the evolutionary utility of consciousness in biology.
I would strongly recommend reading the paper itself here, as Seth develops his arguments with a clarity and insight that I can’t replicate in a short article. But I did want to highlight below a number of the ideas he puts forward that I found particularly intriguing.
The first is how the concept of consciousness is potentially wrapped up in the idea of simply “being alive” — and a biological imperative to “stay alive.”
As Seth writes, “[L]iving systems have ‘skin in the game’ … [t]hey are constantly engaged in processes of self-maintenance within and across multiple levels of organisation. The (higher level) mechanisms of predictive processing are, on this view, inextricably embedded in their (lower-level) biological context. This embedding is necessary for realizing the higher-level properties, and for explaining their connection to consciousness.”
From here Seth goes on to bring in the concept of autopoiesis and systems that “continually regenerate their own material components.” As Seth argues, human-made systems like machines (and AI) do not generally reproduce themselves — especially not across the multiple scales that are indicative of biological systems — but biological systems where consciousness is observed do, to the extent that “there is really no way to separate what they are from what they do.”
From here, Seth connects the ideas of “being alive” and autopoiesis with the ability (and even the necessity) of biological systems to maintain themselves in a state that is out of thermodynamic equilibrium with their environment “in order to fend off the inevitable descent into disorder.”
I’ve long suspected that there is a strong thermodynamic argument to be made against the inevitable rise of superintelligent (and conscious) self-replicating and self-improving machines. And here, Seth puts flesh on the bones of these suspicions.
In Seth’s words, “[t]o stay alive means to continually minimise entropy because there are many more ways of being mush than there are of being alive.”
Could this continuous state occur without consciousness, and with a separation of what something is from what it does? Seth acknowledges that it might be possible, but suggests that this is highly unlikely.
In other words, for something to be conscious, it is likely to either be so close to biology as to be indistinguishable from it, or to have the same deep integration of form and function that is seen across all scales within biology — neither of which is feasible with digital computing or substrate neutrality.
While this points toward consciousness as not being something we need to worry too much about in AI systems (and I’ve seriously truncated his arguments here — another reason to go read the original paper), Seth does end on a note of caution.
Here, he starts with the dangers of striving to create conscious machines — or “real artificial consciousness” — despite the seeming improbability of them being possible. As he writes, “[r]eal artificial consciousness may be very unlikely or impossible. However, if such systems were to emerge — whether by design or accident — we would risk an unprecedented ethical catastrophe.”
I firmly agree. As Seth points out, creating artificial consciousness risks also creating new forms of suffering — and forms that we might not even recognize as suffering. This is particularly problematic where the subjects of such suffering are categorized as “objects” and “possessions” that are irrelevant to moral and ethical discourse around rights and responsibilities.
Here, Seth states that “[t]he development of real artificial consciousness for its own sake should not be an explicit goal. This might seem obvious, but it is surprising how often ‘conscious AI’ is treated as a holy grail of AI research … The fact that real artificial consciousness may be impossible on current trajectories does not license us to blindly pursue it in the service of some poorly thought-through techno-rapture.”
While Seth considers the possibility of real artificial consciousness as remote, he places the likelihood of conscious-seeming AI as substantially higher: “In contrast to the implausibility and uncertainty surrounding real artificial consciousness, it is almost certain that AI systems will soon give us compelling impressions of being conscious.”
This, he argues, comes with its own ethical challenges. In particular he worries that the current batch of large language models and more advanced AI systems will lead to “cognitively impenetrable illusions of consciousness” that will make it near-impossible for us to treat them as if they are not conscious.
In other words, we may rationally understand that an AI is not conscious, but be instinctively incapable of acting on this knowledge.
This has profound implications for how we individually and collectively engage with emerging AI systems, and for how the relationships that result impact us in turn — even to the extent that we become vulnerable to manipulation by AI.
Here, Seth is concerned that “AI that appears to be conscious is well placed to exploit our psychological vulnerabilities, whether by accident or by (corporate) design.” It’s a concern that I’ve previously highlighted in the book Films from the Future while discussing the film Ex Machina. And it’s one that I think is even more relevant now.
Yet despite this possibility, the way forward to either developing conscious-seeming AI responsibly, or avoiding such a scenario completely, is far from clear — especially as the emergence of such systems seems inevitable (and, some would argue, is already here).
As companies like OpenAI, Anthropic and others engage in a race to produce ever-more human-like AI — with very little governance overseeing the subtler potential social implications of emerging systems that we can’t help but respond to as if they are conscious — it seems inevitable that they will become an integral part of many people’s lives.
As they do, there are likely to be substantial benefits around productivity, learning, and even health and wellbeing that come from engaging with conscious-seeming AI. It could even be argued that many of the benefits people are projecting will come about precisely because these systems act as if they are conscious and aware.
Yet what will the societal and personal price be of opening our lives to technologies that have these capabilities?
It’s a question Seth raises with some seriousness. But it’s also one that will require a level of society-wide discussion and action around responsible AI development that we have yet to see.
Unlike many of the hubristic statements and analyses coming out of developers and communities caught up in technocentric visions of an AI future, this is a paper that I would consider essential reading for anyone who truly wants to be a part of the emergence of societally beneficial AI. It raises important questions and opens up critical conversations while being deeply grounded in the state of the art, and reflecting a humility that is essential to good science and responsible progress.
You may not agree with all the arguments made or the inferences drawn. But that’s part of the point — this is a conversation starter about how to avoid serious mistakes that stem from the idea that, when playing with the future, it’s OK to simply go fast and break things because we’re building something new.
To press this home, I thought it only fitting to end with Seth’s final paragraph:
“Language models and generative deepfakes will have their day, but AI is just getting going. We do not know how to guarantee avoiding creating real artificial consciousness, and conscious-seeming systems seem inevitable. When in the early 1800s, at the age of 19, Mary Shelley wrote her masterpiece Frankenstein, she provided a cautionary moral tale for the ages. The book is often read as a warning against the hubris of creating life. But the real Promethean sin of Frankenstein was not endowing his creature with life, but with consciousness. It was the capacity of his creation to endure, to suffer, to feel jealousy and misery and anger that gave rise to the horror and the carnage. We should not make the same mistake, whether in reality, or even in appearance.”
Anil Seth, Conscious artificial intelligence and biological naturalism. PsyArXiv Preprints, June 20, 2024. DOI: 10.31234/osf.io/tz6an
Great overview of Anil Seth’s paper. For me, the point is not whether AI ever becomes sentient. It’s when humans believe that it is... and unconsciously treat it like it is... that society will be altered. I said this a few weeks ago to some of the U.S.’s top STEM scholars at a conference. A few were curious, one shrugged his shoulders, and another said, “wow… humans probably are really that stupid.”
But let's be fair-- it's a matter of vulnerability, not stupidity. Big Tech utilizes persuasive psychology and neuroscience in every aspect of design, while the average person has little clue (or ability) to counter the most basic of manipulations. Training future scientists to think about philosophical and psychological consequences of tech they create is essential--- yet it's often not required in higher ed. It should be.
(P.S. In my interactions with Anil years ago, he was always incredibly generous and humble. He turned me on to Andy Clark's extended mind thesis back in the day...:-)
One non-expert thought. If our consciousness is unreproducible because somehow Quantum, can the unpredictable future paths of Quantum computers possibly reopen new possibilities of reproducing consciousness? If not, why not?...