Navigating the Ethical Dilemmas of Human-Enhancing Brain-Computer Interfaces
An important new essay by Emma Gordon and Anil Seth explores the ethical, legal, and scientific implications of BCIs designed to enhance performance
In 2019, researchers at Neuralink published a paper in the Journal of Medical Internet Research that described the company's research on invasive brain-machine interfaces, or brain-computer interfaces (BCIs) — devices inserted into the outer layers of the brain to allow signals to be extracted from, and written to, small clusters of neurons.1
The paper preceded the company's experimentation on primates and the use of its devices in human trials, and it focused only on medical applications of Neuralink's approach to BCIs. Nevertheless, it opened the door to more serious consideration of the technology's use in human enhancement and the ethical challenges this presents — a topic that Emma Gordon and Anil Seth address in a rather excellent new essay in PLOS Biology.
Going back to 2019, my colleague Marissa Scragg and I were asked to write a response to the Neuralink paper that focused on the ethical aspects of the company’s work at the time. That paper — The Ethical and Responsible Development and Application of Advanced Brain Machine Interfaces — was one of the first to explicitly explore some of the potential challenges that a company like Neuralink might face as it developed its technology.
Five years on, the landscape around the ethics of BCIs has evolved — especially around their potential use for human enhancement — and the questions we began asking in 2019 have taken on much greater urgency.
And so I was very pleased to see Emma and Anil expertly mapping out emerging ethical considerations for the use of invasive brain-computer interfaces for cognitive enhancement.2
Their essay is one that I’d encourage anyone who’s interested in the emergence of invasive BCIs designed for human enhancement (or eBCIs) to read in full. It’s authoritative,3 clearly written, and provides a great overview of BCIs as well as the ethical questions they’re raising.
Gordon and Seth start out with a clear description of BCIs, and the distinction between non-invasive technologies such as transcranial magnetic stimulation (TMS) and the use of electroencephalogram (EEG) signals, and invasive technologies that include electrocorticographic recordings and implants like the ones used by companies such as Neuralink and Synchron.
They then focus in on the possible uses of implants in cognitive enhancement, and begin to map out the potential these technologies have — as well as the limitations that they face — as a precursor to grappling with six key ethical questions.
I very much appreciated their scene setting here, as it allows the following discussion around ethical concerns to be grounded in plausible challenges rather than hyperbole.
Having established what BCIs are, Gordon and Seth move on to the engineering challenges the technology presents (which are hard), and the scientific challenges (which are harder still).
On the engineering side, they recognize the challenges of getting large numbers of electrodes to interface with a significant number of neurons (even with current technologies only a minuscule fraction of the human brain's neurons can be addressed), and the problems associated with ensuring electrodes stay put over time — something Neuralink has been struggling with in its human trials.
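To put that "minuscule fraction" in perspective, here's a quick back-of-envelope calculation. The figures are approximate and mine, not Gordon and Seth's: ~1,024 channels is the number Neuralink has cited for its N1 implant, and ~86 billion is the standard estimate for the number of neurons in the human brain.

```python
# Back-of-envelope: what fraction of the brain's neurons can a
# state-of-the-art implant even sit next to? Figures are approximate
# and purely illustrative.
electrodes = 1_024            # channels cited for Neuralink's N1 implant
neurons = 86_000_000_000      # standard estimate for the human brain

print(f"Fraction addressed: {electrodes / neurons:.1e}")    # ~1.2e-08
print(f"Neurons per electrode: {neurons // electrodes:,}")  # ~84 million
```

However you run the numbers, each electrode sits in a sea of tens of millions of neurons it cannot individually address.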
But it's the scientific challenge of understanding the brain well enough to use BCIs effectively which, they stress, really constrains what eBCIs might be capable of.
Here, they focus on the need to understand the brain in order to manipulate it in intentional and beneficial ways. And they strongly indicate that we need better scientific causal models of how the brain works if we're to realize the benefits of the technology, while avoiding ethically questionable trial-and-error approaches to its development and use that substitute brute force for understanding.
This, though, as Gordon and Seth point out, is fiendishly difficult. Because our understanding of the brain is still so limited, developing effective BCIs is simply not possible through engineering-based solutions alone — no matter how impressive they seem.
I love the analogy they use here of space flight — a deeply impressive human accomplishment that has, nevertheless, primarily relied on engineering solutions because the science behind it is relatively well understood. It’s a great reminder that BCIs are not “rocket science” because, unlike rocket science, we don’t yet have the science to underpin the engineering that advances will rely on.
Yet despite this, Gordon and Seth throw a bone to engineers who can't wait for the science to catch up. And they do this by suggesting that artificial intelligence may "soften," if not completely eliminate, the science challenges facing the development of successful BCIs.
It's hard to tell at this stage how far AI-driven engineering solutions might support BCIs designed to enhance performance — and Gordon and Seth suggest that near-term technologies may be "limited to controlling apps on phones or other similarly prosaic activities". But they also acknowledge that, in spite of the considerable challenges, BCIs still hold promise for human enhancement in the future.
At this point in the essay, Gordon and Seth "set practicality and feasibility to one side and give imagination free rein" as they consider what might be achieved with eBCIs. It's a technique that I've used in the past to explore the plausible boundaries of emerging technologies, and one that works well for closing the shutters on hyperbolic speculation by effectively saying: this is what we can create in our imagination, and this is why it's destined to stay there.
It’s also a critically important step toward ensuring ethical questions and constraints are grounded in plausibility rather than hyperbole.
Gordon and Seth's "free rein" imagination is a delight to read — so much so that I took the liberty of quoting a whole chunk of it below:
"One can imagine eBCIs that enhance memory, by providing instant thought-based access to both autobiographical and semantic information stores. eBCIs might improve attentional focus by directly modulating neural circuits involved in sustaining attention. Relatedly, eBCIs might also be able to modulate mood, in order to allow people to function more effectively in various demanding situations. Perhaps eBCIs could allow people a wider bandwidth of control and communication by transcending the limits of having only 1 mouth and 2 hands. More generally, eBCIs might bring about substantial epistemic benefits, by dramatically changing capacities for knowledge access and discovery. Language barriers might become a thing of the past, once eBCIs can implement direct thought-based translation. Even more provocatively, could eBCIs increase the "bit rate" of human cognition, as the Neuralink founder has suggested?
"Other examples arise when we widen the lens. eBCIs might be able to enhance our physical prowess as well as our cognitive capability, perhaps by improving reaction times or sensory sensitivity. And why not use eBCIs to endow us with entirely new sensory and perceptual modalities, by stimulating the brain with signals derived from things like sonar, compass direction, or even current stock prices? Consider the potential enhancement benefit of being able, as technology advances, to control additional robotic limbs when two hands are not enough. Or consider, for example, being able to decode a stream of thought about one topic, while simultaneously holding a conversation about another. What about being able to dramatically accelerate your rate of thinking, much as you might speed up the playback rate for an audiobook?
“Then there are wider benefits that may accrue to society at large. If many people augment their individual cognitive prowess, perhaps this could lead to more efficient and effective coordination and synthesis of information, leading in turn to enhanced capability to address societal challenges such as curing disease, mitigating climate change, and the like. eBCIs might even allow direct brain-to-brain interactions, so that the extended mind now becomes an extended collective mind, with additional capabilities that would be difficult to anticipate.”
I've stripped the above passage of its citations — you'll have to read the original for these. Suffice to say that, wild as the ideas are, they reflect the thinking of a good number of engineers and technologists working in this space.
After this speculative buildup, Gordon and Seth rather abruptly burst the reader’s bubble by stating “Such scenarios are not just implausible by the scientific and engineering criteria discussed earlier; they are psychologically and philosophically implausible too.”4
They hammer this home by pointing out that "this kind of untrammelled neuro-utopianism is useful only to sketch the space of possible futures. Our imagination ought not be given entirely free rein, but—to be useful—should be reined in by what is, and what is likely to be, practical, feasible, and ethical"5 — and in doing so, they segue to navigating the ethical concerns raised by eBCIs.
Having explored the challenges and opportunities presented by eBCIs, Gordon and Seth pull themselves back to more plausible near futures and address six questions that, to them, are critical to navigating the path between what is possible, and what is ethical.
These questions address potential erosion of privacy, technological inequality, the emergence of a “mental monoculture,” the risk of inauthenticity, the threat of cheapened achievements, and the loss of autonomy — accompanied by the dangers of “hyperagency.”
Gordon and Seth do an excellent job in the essay of addressing each of these in turn. Below I highlight just some of the ideas they explore that particularly stood out to me:
Erosion of privacy
Under erosion of privacy, Gordon and Seth point out that, while there are protections in place in many societies around how thoughts are expressed through actions (including actions such as protests and speech), eBCIs will potentially push us into new territory if it's possible to reveal what someone's thinking before they act — or even if it's possible to manipulate these thoughts before they're translated into action.
Should thoughts and ideations remain private, even if the capacity is there to reveal them? What responsibility lies with companies controlling the flow of information from and to eBCIs — especially if it reveals ideas or latent intent that are socially concerning? And to what extent should companies be allowed to monetize or otherwise extract value from neural data?
These questions become even more pertinent when a potential deluge of future neural data is seen as high-value training data for artificial intelligence models. Gordon and Seth raise this concern, but stop short of pushing it to a number of plausible conclusions — one being eBCI users being incentivized to "share" their neural data in exchange for free or reduced-cost subscription services.
Inequality
The question of possible incentives is also relevant to the second question Gordon and Seth raise — that of technological inequality associated with preferential access to eBCIs.
Gordon and Seth don't address incentives directly here. But they do discuss the possible dangers of a positive cognitive inequality feedback loop between those privileged enough to get eBCIs, and those who are not. And extrapolating from this, it seems likely that eBCI-driven societal inequalities will lead to creative ways for companies to extract value from users who cannot otherwise afford the technology.
Mental monoculture
If, by various mechanisms, eBCIs do become ubiquitous, Gordon and Seth raise a third concern — the danger of creating a “mental monoculture” where, because everyone is locked into similar enhanced cognitive patterns, there’s a decrease in creativity and innovation across society.
This potential risk intrigued me. It's not one I'd considered before, but it makes a lot of sense given the possibility of certain patterns of thought and behavior being favored by mass-produced devices. It also hints at a more sinister manipulation of "group think" by dominant producers, controlling governments, or even AI systems integrated into increasingly sophisticated eBCIs.
Inauthenticity
The potential risks of a mental monoculture led to an even more esoteric-sounding risk in Gordon and Seth's fourth question — the possibility of eBCIs impacting who we understand ourselves to be, and the danger of eBCI-caused "inauthenticity."
As Gordon and Seth note, “the core idea is that living in accordance with one’s ‘true self’ is a crucial component of human flourishing”. If eBCIs alter cognitive function to the extent that users are no longer sure what their true self is, do we have a problem?
This is as much a philosophical problem as it is a cognitive question, as its resolution depends on how the concept of "true self" is defined. If it's believed that our true self is defined by something in all of us that's constant — not changing over time and circumstance — then a technology that interferes with our ability to discover and live by this risks undermining our authenticity: our sense of who we are and how we express this.
On the other hand — as Gordon and Seth note — if being true to oneself is to live consistently with values that emerge and develop over time, then a mind-altering eBCI simply becomes part of the process of discovering who you are — enhancements and all.
This question of authenticity is one that isn’t unique to eBCIs. Rather, it applies to pretty much any technology that has the potential to modulate or otherwise interfere with our understanding of “true self”. But any potential future ability to profoundly impact who we think we are by directly manipulating the brain raises these questions to a new level.
Cheapened achievements
Gordon and Seth’s fifth ethical question concerns the possible dangers of “cheapened achievements,” and whether eBCIs will erode the perceived or actual value of what people are capable of.
This is one that I struggled with — in part because it depends on whether you think there is intrinsic value in what a person achieves, or whether you believe that human achievements are a means to an end. And as Gordon and Seth acknowledge, "recent literature in bioethics suggests that the 'cheapened achievements' worry is over-stated as a general argument".
If what we achieve as biological beings — our creativity, our imagination, our art, our understanding of who we are and the universe we live in — is essential to defining what it means to be human, then eBCIs that diminish what we are capable of achieving unaided could be seen as an existential threat to who we are. And if we understand our brains to be the seat of our minds and, ultimately, our identity, this could end up posing a very real threat.
But there are plenty of counterarguments to this, not least that we’ve been using artificial means to enhance our minds in order to achieve stuff for millennia. And eBCIs are just the latest in a long string of tools for doing this.
There’s also the question of the extent to which who we are is defined by what, as a society, we are capable of achieving — enhancements and all.
Whichever side of the "cheapened achievements" fence you stand on, it does seem that eBCIs raise both possibilities and concerns that push thinking around technological augmentation and value-creation beyond where it currently lies.
And this leads to Gordon and Seth's sixth and final ethical question — the tension between loss of autonomy and "hyperagency."
Loss of autonomy vs hyperagency
In this section of the essay, the threat of loss of autonomy is reasonably self-evident: it reflects the potential for an eBCI to interfere with the chain of events between what we intend, and what occurs as a result.
This could be as simple as eBCI algorithms misinterpreting neural signals in ways that lead to unintended actions. Or it could conceivably be a result of an AI-controlled eBCI making decisions on a user’s behalf — an electronic conscience or set of guard rails that decide what we should or should not do, rather than simply allowing us to do what we want to do.
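To make this failure mode concrete, here's a minimal, purely hypothetical sketch of the kind of decoding step an eBCI might perform. None of the names, weights, or thresholds below come from any real system; the point is simply that a decoder maps noisy neural features onto discrete actions, and that a confidence threshold is one crude guard rail against acting on a misread signal.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical actions an implant might trigger on a paired device.
ACTIONS = ["cursor_left", "cursor_right", "click", "do_nothing"]

# Stand-in decoder weights. In a real system these would be learned from
# calibration sessions; here they're random, purely for illustration.
W = rng.normal(size=(len(ACTIONS), 16))

def decode(features, threshold=0.7):
    """Map a neural feature vector to an action, abstaining when unsure."""
    scores = W @ features
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()           # softmax over candidate actions
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None                # too ambiguous: better to do nothing
    return ACTIONS[best]           # may still be wrong, just confidently so

# With noisy input, the decoder can return a confident but unintended action.
print(decode(rng.normal(size=16)))
```

Even in this toy version the trade-off is stark: set the threshold high and the device ignores genuine intentions; set it low and it acts on misreadings. That, in miniature, is the loss-of-autonomy concern.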
And, of course, there is always the possibility of malicious overrides between what we ideate and what occurs as a result — or, even more worrying, an eBCI that has the capacity to write ideas, thoughts, and intentions into our brain, so that what we think of as our ideas are, in fact, not.
These are all real and serious ethical concerns for future eBCIs. But it was the flip side of this autonomy coin, which Gordon and Seth also sketch out, that really intrigued me — the possibility of such a strong eBCI-enabled alignment between what we intend and what we are able to achieve that we experience an "explosion of responsibility" that we're not ready for.
As Gordon and Seth explain:
“Such a concern has been expressed by Sandel.6 As he sees it, when we become more capable via enhancement, we thereby become to that extent more responsible for how our lives go. Such a boost in personal responsibility, as he sees it, may have its own risks for our overall well-being; the more we are responsible for—i.e., the more we transform into “hyperagents”—the fewer excuses we have for mistakes or bad choices, and the standard to which we are held may be dauntingly high.”
Watching how many people seem to be quite comfortable with behaving irresponsibly in the face of current technological capabilities, I’m not sure I can see eBCI users going into an existential tailspin around the responsibility that comes from hyperagency. But I may be wrong.
Either way, this is a possibility that we should probably be discussing more than we are.
The skull as a sacrosanct barrier
Gordon and Seth conclude with what is almost an aside about the significance of the skull in how we think about eBCIs.
As they point out, while there doesn’t seem to be anything biologically special about the skull, it seems to represent a boundary that puts us in new ethical territory as we cross it.
This probably stems in part from a sense — in Western thinking at least — that the essence of who we are lies within the brain, and that the skull is a sacrosanct barrier between what makes us us and the world around us.
As Gordon and Seth point out, this makes less sense the more we learn about ourselves. At the same time, they speculate that “once one breaches [the skull], no further boundaries remain” and “The importance of the skull may therefore lie in preserving the idea of autonomy, rather than in preserving anything particular about autonomy or agency itself.”
And this trips over into their final thought, which caps the whole essay and emphasizes why, even though eBCIs are at such an early stage of development, we ignore the ethical questions they raise at our peril:
As Gordon and Seth conclude, “The merging of our neurobiology with technology raises fundamental questions about who, and what, we are—and who and what we can, and ought, to be.”
Emma Gordon and Anil Seth’s essay “Ethical considerations for the use of brain-computer interfaces for cognitive enhancement” can be read for free at PLOS Biology — I’d highly recommend it.
Rather bizarrely, the first author on the paper is Elon Musk, with the second author being the company — Neuralink. As it's clear that the bulk of the research presented in the paper was not conducted by Musk, it's unclear why this ethically dubious choice of authorship was made.
Gordon, E. C. and A. K. Seth (2024). "Ethical considerations for the use of brain–computer interfaces for cognitive enhancement." PLOS Biology 22(10): e3002899. DOI: https://doi.org/10.1371/journal.pbio.3002899
Both Emma and Anil are respected academics in their fields. Anil is also a Fellow of the prestigious Canadian Institute for Advanced Research Program on Brain, Mind, and Consciousness (and in full disclosure, I’m a member of the CIFAR Research Council).
I stopped short of including my favorite quote from the paper in the main text, but couldn't resist including it here in the footnotes, if only to make it clear to every student who's told me that they "can't write academic" that it's not how you write but what you say that's important — as long as there's rigor and scholarship behind it: "These feasibility constraints substantially undermine the likelihood of a future in which we all pop along to our high-street neurosurgeon, install the latest eBCI, and emerge newly super-intelligent—and with a new hole in the head." 😄
I really wanted to add an "amen" here but thought it would probably be inappropriate in the main text! That said, this is a perspective that every entrepreneur, founder, and student studying innovation should have stamped indelibly in their thinking.
Sandel MJ. The case against perfection: Ethics in the age of genetic engineering. Harvard University Press; 2007. https://scholar.harvard.edu/sandel/publications/case-against-perfection-ethics-age-genetic-engineering