9 Comments
Jul 1 · Liked by Andrew Maynard

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the neurorobotics lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which arose only in humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work: that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.

My advice to people who want to create a conscious machine is to first ground themselves seriously in the extended TNGS and the Darwin automata and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Jul 1 · edited Jul 1 · Liked by Andrew Maynard

Great overview of Anil Seth’s paper. For me, the point is not whether AI ever becomes sentient. It’s when humans believe that it is... and unconsciously treat it like it is... that society will be altered. I said this a few weeks ago to some of the U.S.’s top STEM scholars at a conference. A few were curious, one shrugged his shoulders, and another said, “wow… humans probably are really that stupid.”

But let's be fair: it's a matter of vulnerability, not stupidity. Big Tech applies persuasive psychology and neuroscience to every aspect of design, while the average person has little clue (or ability) to counter even the most basic manipulations. Training future scientists to think about the philosophical and psychological consequences of the tech they create is essential, yet it's often not required in higher ed. It should be.

(P.S. In my interactions with Anil years ago, he was always incredibly generous and humble. He turned me on to Andy Clark's extended mind thesis back in the day...:-)

Author · Jul 1 · edited Jul 1

I agree, and to dismiss this concern as stupid or irrational behavior is incredibly naive. We know that our behavior is deeply shaped by cognitive biases and heuristics, to the point that the assumption that we are rational beings becomes one of those biases itself. That makes us, as humans, very easy to influence by anything that presses our cognitive buttons.

Jul 1 · Liked by Andrew Maynard

One non-expert thought: if our consciousness is unreproducible because it somehow depends on quantum effects, could the unpredictable future paths of quantum computers reopen possibilities for reproducing consciousness? If not, why not?

Author

Great question, and Anil touches on this. There are quite deep and interesting theories about the intersection between quantum behaviors and consciousness. One challenge with reproducing this in machines is the sheer complexity of the technology required; you also still run into many of the challenges around replication and free-energy reduction that Anil mentions. But there are some really interesting possibilities and questions here.

Jul 1 · Liked by Andrew Maynard

Thanks for highlighting this article. I was just writing a piece on anthropomorphic AI, so this is quite timely.

What's interesting is that Seth's argument remains firmly rooted in the materialist conception that consciousness somehow arises from physical biology, and yet it still reaches the conclusion that consciousness may not be replicable in machines. That is to say nothing of the idealist position (which seems to have gained favour alongside relevant discoveries in quantum physics), which puts conscious machines firmly in the realm of fantasy.

On that subject, have you seen Federico Faggin's new book 'Irreducible' (with a foreword by Bernardo Kastrup)? Worth a peek for a well-rounded idealist synthesis of quantum science, the philosophy of consciousness, and the implications for AI.


Dear Andrew,

Thank you for this. I read the paper. Very interesting and sobering. Here I offer my thoughts.

I find it problematic that there is no clear definition of consciousness; however, it would be naive to argue against its existence. I too like to think of consciousness as the feeling of being oneself. In terms of group operations, I consider consciousness to be the set of operations that function as the identity operator, coupled to the system under consideration and its environment, for there is no consciousness without an environment (I argue). We can then treat actions like thinking about sitting down while actually sitting down, or dreaming while sleeping, as simple tests for confirming consciousness. This aligns naturally with the current informal idea of consciousness as what disappears under deep general anesthesia: under anesthesia, we cannot test or generate an action that functions as an identity operator within the system under consideration.

Interestingly, there exist operators or actions that can only be inferred implicitly and are therefore described as biases or effects. These yield the high degree of consciousness that we humans attune to and express. For example, human exceptionalism, when we consider humans as unique or exceptional individuals, is an implicit operator that influences our perception of consciousness. Thus, I disagree with the author's suggestion that consciousness is something we 'use' to exhibit human exceptionalism; rather, I believe human exceptionalism is an expression of consciousness. I completely agree with the author on the necessity of not conflating intelligence with consciousness. On the other hand, I argue for sentience as a primitive form of consciousness, one that may be achievable with specialized circuits (via tensions, distributions of currents and voltages, and designed functions exhibited by those circuits). However, we cannot ascribe values of identity or consciousness to such machines. For what does it matter, one could (somewhat dangerously) argue, if a superintelligence feels the same way as a fly, in the sense that most of us do not grant flies rights as moral imperatives?
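
To make the group-operations analogy concrete, here is a minimal formal sketch (my own rendering of the analogy above, not notation taken from Seth's paper): in a group, the identity element leaves every element unchanged, and the 'sitting down while thinking about sitting down' test treats consciousness as the availability of an action that leaves the coupled system-environment state fixed.

```latex
% Illustrative notation only: my own rendering of the analogy above.
% In a group (G, \circ), the identity element e satisfies
\[
  e \circ g \;=\; g \circ e \;=\; g \qquad \forall\, g \in G .
\]
% The proposed test treats consciousness as the availability of an action a
% (e.g. thinking about sitting down while actually sitting down) that acts
% as an identity on the coupled system-environment state s:
\[
  a(s) \;=\; s, \qquad s = (\text{system},\ \text{environment}).
\]
% Under deep general anesthesia, no such identity-like action can be
% generated or verified, which is the informal marker referred to above.
```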

Thus, it is important not to conflate the ability to exhibit responsiveness (excitatory or inhibitory responses) with the feeling of being, the latter of which aligns with the idea of consciousness. We could consider that varying degrees of complexity and flexibility in computational circuits lend some credence to computational functionalism. However, I believe current paradigms may not suffice, and we may need advances such as electrolyte or chemical exchanges, microfluidics, and neuromorphic computing to mimic or achieve the degrees of complexity and flexibility necessary for a system to exhibit consciousness (I wrote more on this later, as I read through the paper).

I disagree with the author's view that computational functionalism, derived from the philosophical notion of functionalism, means consciousness is only about what a system does and not what it is made of. This perspective undermines our previous agreement on the definition of consciousness. We could argue that a sufficiently complex and flexible set of circuits, coupled with chemical exchanges, tensions, regular von Neumann architectures, and neuromorphic-like computations, could create the conditions through which a set of actions (identity operators or functions), when applied to the system, generates the feeling of being, or consciousness. I also disagree with the author's claim that the ingredients of computational functionalism are purely computational in nature, that is, computational parameters, inputs, and outputs. This corresponds to the 'what' but misses the 'how'. In this case, the 'how' relates to the special types of circuitry and mechanisms mentioned above, such as neuromorphic-like architectures and chemical reservoirs and exchanges (e.g., electrolytes, fluids, salts). More on this comes later, as I progressed through the article.

While I have not read Shiller's papers, I disagree with the necessity of 'intrinsic causal powers' for the parts playing their functional role to be valid and attributable to mental states and consciousness. The argument seems to leap from discussing mental states and consciousness to talking about water alone, which makes it somewhat naive and unlikely in my view. There is no way to carve up water in any meaningful way, given the absence of major chemical potential differences, that could support Shiller's claim. I do not fundamentally disagree with the 'intrinsic causal powers' argument (more on that later, under IIT, which I actually think may be accurate and precise, and thus correct), but I find the water analogy naive (even if hyperbolically expressed).

I agree with the author on the issue of mortal computation and the generalized notion of multiple realizability (or substrate neutrality). However, this does not rule out a potentially conscious AI, one that we might 'kill' or 'put to sleep' each time we shut down, restart, or temporarily disconnect the system. I do agree that this places constraints on the multiple realizability of these computations, as the author suggests.

I was honestly a little disappointed with section 3.4. It felt like it started weakly, almost as an attack, before jumping to dynamical systems. Many argue about the computability of dynamical systems via algorithms (and concatenation or stacking), and about the complexity of continuous processes. It would have been nice to hear from the author that analog computing can be seen as an alternative to dynamical-systems approaches, and that combining classical computing paradigms with dynamical-systems paradigms could lead to the emergence of interesting properties. He even mentions this in a caption describing research on analog computing in which chips exhibited different properties depending on their temperature. It is similar with humans: at a high level, compare people from cold regions with people from warm regions. Their conscious experience and active inference or sentience processes are, on average, different, which we describe as 'cultural differences'. While writing this, I realized that in section 5.3 the author discusses consciousness in unconventional AI, but I have left this paragraph in for perspective.

Jul 2 · edited Jul 2 · Liked by Andrew Maynard

...

I like the idea of biological naturalism, but I am not sure it is accurate. The idea of autopoiesis is a little extreme: it's not that we regenerate our own material basis per se; we feed it externally to a large extent. The same can be said of a self-healing or self-writing mechanism on computer hardware, which peels off an oxide layer through an external voltage or potential, or rewrites a program at the software level (even at a low level, following the peeling of the layer, to remain bound to its set of predictive or programmed computational functions), once certain thresholds are enabled, achieved, or set. So I do not agree that we are fully autopoietic, nor that computers cannot be made more autopoietic, nor that they are not already somewhat autopoietic at the software level. From the free energy principle perspective, I think computers can also be seen as maintaining their boundaries over time; indeed, boundaries are a huge topic across computer science and computer engineering. Although this is beyond my current level of understanding (I have not studied or thought about these topics in detail), I do not see how the free energy principle and its relation to cognitive processes and autopoiesis (and, further, consciousness) can be separated from dynamical systems-based implementations or general computing paradigms.

I think it's interesting to consider Integrated Information Theory (IIT) and its claims of 'internal causal structure.' It's noteworthy that I have previously, including in my Ph.D. dissertation when I studied basic conductive polymer motifs and categorized them as 'smart materials,' distinguished the 'inner' workings or actions of a system from its 'internal' workings or actions. The latter, I argue, is highly obfuscated and not fully computable or inferable. This complexity or functionality could naturally emerge and does not need to be measured or 'known' for us to assume it exists. In many cases, the presence of both reproducibility and non-linearity can be used to assert that there is something else in a system that we cannot explain. Perhaps this is also abstractly related to consciousness. To clarify, I am not claiming that the materials I studied in my Ph.D. thesis had sentience or consciousness. Rather, I was explaining that there were 'internal' factors about their behavior that we could not explain, distinct from those 'inner' factors that we could more easily infer or empirically deduce based on triangulation, extrapolation, or other forms of deduction based on the data obtained. I know the differentiation between 'internal' and 'inner' may seem pedantic to some, but I think it is nuanced and relevant once you understand what I mean. I disagree with the author’s statement that 'there is also nothing special about life' (if IIT applies). I do not understand how he reaches that conclusion while hypothesizing positively about IIT. It could be that emergence and non-linearity (e.g., deterministic chaos) contribute to this internal causal structure and lack of interpretability. In fact, I would argue that this makes life and the universe even more special.

In the unconventional AI section, I agree with the author about the need to rename the field if architectures that are functionally similar (not necessarily indistinguishable, as he puts it) to the human brain are developed. I would guess it would be something like ALAI, meaning Artificial Life Artificial Intelligence.

I agree with the author that explicitly pursuing the goal of conscious artificial intelligent agents (CAIAs) is a bad idea. In fact, security and safety teams need projects on computational paradigms that prevent the elicitation of consciousness. Several ideas come to mind, such as using idempotence or continuous switching to avoid eliciting consciousness in sufficiently advanced systems that already exhibit primitive forms of it (cohesive and coherent responsiveness, i.e., sentience); a toy sketch of the idempotence idea follows below.
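
To make the idempotence suggestion concrete, here is a minimal, purely illustrative sketch (my own toy example; the function names, the 'drive' field, and the clamping threshold are invented for illustration and are not from Seth's paper or any real safety framework): an idempotent update satisfies f(f(x)) = f(x), so repeated application accumulates no further internal change, whereas a non-idempotent update keeps drifting with every call.

```python
# Toy illustration of idempotence as a design constraint (hypothetical
# names and thresholds; not from the paper or any existing library).

def idempotent_update(state: dict) -> dict:
    """Clamp 'drive' once; applying the update again changes nothing."""
    return {**state, "drive": min(state["drive"], 1.0)}

def accumulating_update(state: dict) -> dict:
    """Non-idempotent: each application keeps adding to 'drive'."""
    return {**state, "drive": state["drive"] + 0.1}

s = {"drive": 3.0}
once = idempotent_update(s)
twice = idempotent_update(once)
assert once == twice  # idempotent: reapplication causes no further change

a = accumulating_update(s)
b = accumulating_update(a)
assert a != b  # non-idempotent: internal state keeps drifting
```

The design intuition, under this framing, is that constraining core update rules to be idempotent (or regularly 'switching' them, as suggested above) limits the kind of self-reinforcing internal dynamics one might worry about.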

Kind Regards,

Daniel

Author

Thanks Daniel, and I really appreciate the thoughtful and in-depth response! I think one thing this shows is that there is still so much more to learn and understand about consciousness (which, of course, Anil acknowledges) and that, while it's possible to develop rational constructs and arguments about its nature, there's a lot of room for building new ideas and theories.
