Neuralink’s Technology Is Impressive. Is It Ethical?
Elon Musk’s audacious vision of a smartphone-controlled brain-machine interface comes with potential risks.
Imagine being able to walk into a strip mall and have thousands of microscopically fine electrodes inserted into your brain, all implanted as quickly and as efficiently as if you were having LASIK eye surgery, and designed to boost your brain via a simple smartphone app.
Until this week, this was the stuff of science fiction. Yet at a launch event this past Tuesday, Neuralink, the company founded by Elon Musk, claimed it was on track to achieve this and more over the next few years.
Neuralink’s brain-machine interface technology is deeply impressive. Using Musk’s now familiar model of bringing together new talent from different fields to accelerate the rate of technological innovation, the company has made massive strides in what is achievable. But despite the technical promise of wireless read-write brain-machine interfaces, companies like Neuralink are in danger of getting so wrapped up in what they can do that they lose sight of the ethics behind what they should do.
The ethics of neurotechnology
In my 2018 book, Films from the Future, I wrote about what we then knew about Musk’s creation of a neural lace, a term that comes from the science fiction of Iain M. Banks and describes a future brain-computer interface. Now that the future is a little closer, I have some further thoughts on the potential risks and ethical issues surrounding Neuralink.

Although we’re still discovering how much the rest of the body influences who we are, we tend to think of the brain as the organ that ultimately defines us. This is where the roots of our sense of self and identity lie, where we receive and process data, where our intellect and reason are seated, and where our deepest feelings and aspirations reside.
This is, in part, why ethics are so important in the development of responsible neurotechnologies. But these technologies also come with social risks, which further complicates matters. When a new technology has the potential to change collective behavior, disrupt social norms, or undermine established values, there are broader ethical questions around where the boundaries between “can” and “should” lie.
Until this past week, these were largely theoretical questions. Neurotechnologies have been around for a while, from cochlear implants and deep brain stimulation to more sophisticated brain-computer interfaces. But they are sufficiently limited that they’ve allowed breathing space for accompanying conversations around their ethical development and use.
But with Neuralink’s launch event and the accompanying paper on their underlying technology, these and larger ethical questions have taken on a new urgency.
Pushing the boundaries of what’s possible
What makes Neuralink’s advances so potentially disruptive is their technological feasibility. This isn’t vaporware: the tech the company is working on appears to be grounded in solid science and engineering. While the current state-of-the-art allows limited numbers of crude electrodes to be hardwired into critical parts of the brain, Neuralink is developing integrated solutions where tens of thousands of ultrafine, flexible, read-write electrodes can be precisely inserted into the brain. These are placed using cutting-edge precision robotics, and will eventually be wirelessly controlled from a smartphone app to combat neurological disorders.
This, however, is just a taste of what’s coming down the pike. Using the platforms they’ve developed, Neuralink’s long-term objective is to enhance how our brains function by adding a third, artificial processing layer to them, via an easy surgery that might take only a few hours. Based on current progress, this ambition is well within the bounds of possibility.
Yet as the late Stan Lee might have observed, with great power comes great responsibility. And this is where Neuralink and others in the field need to be thinking critically about how to innovate both responsibly and ethically.
Ethical challenges
As always, there’s a danger of paralysis by analysis as soon as anyone brings up the ethics of advanced technologies like brain-machine interfaces. We can all speculate about the potential psychological harms of advanced brain-computer interfaces, or the dangers of brain-hacking or mind-jacking. And it’s easy to imagine dystopian visions of a future where social behavior is controlled by machines, as we sacrifice autonomy for neural lace convenience.
Yet this type of speculation is rarely helpful when trying to navigate the landscape between a powerful technological capability and its ethical and socially responsible development. Instead, despite the temptation to sensationalize and even fictionalize potential risks, there’s an urgent need for informed thinking about plausible issues, and how to navigate them. And in the case of Neuralink, this means grappling with three specific areas of ethical and responsible innovation.
Physiological impacts
First, there are the potential acute and chronic physiological impacts associated with inserting thousands of electrodes into the brain. Ensuring the safety of this tech is far from trivial. Yet here, I’m reasonably confident that regulators, researchers, and developers will be able to identify and navigate the key challenges. Having worked for many years on the potential health risks of novel materials, including nanoparticles, I have a lot of respect for the scientists and regulators who will be working to ensure the neurological medical devices developed by Neuralink do as little harm as possible. But at the same time, they’re going to have to be open to new ideas as the technology breaks new ground.
Psychological and behavioral impacts
The second area is trickier, and concerns potential psychological and behavioral impacts. Where the technology is used for medical purposes, there will always be tradeoffs between the benefits of neural interfaces and how they might affect a person’s mental state and behavior. But as the technology moves from remediation to enhancement, potential behavioral and mood changes will demand much greater scrutiny.
For example, is there a likelihood of personality changes or addictive behavior, or the emergence of chronic psychological disorders, as people begin to use these devices? Here, there is a risk that long lag times between the widespread use of the technology and the emergence of psychological issues could further complicate things. This could spell disaster if people become dependent on the technology before the long-term impacts are fully understood.
Then there’s a third area of ethical concern: the potentially broader societal impacts of the technology.
While Neuralink is currently focused on using its technology to address medical conditions, the company’s long-term goal is to create an artificial internet-connected overlay to the brain that enables users to interface with future intelligent machines. This is an audacious goal that is very overtly aimed at changing society. And because of this, it raises questions around ethics and responsibility that have to be grappled with while there’s still an opportunity to steer the technology toward responsible ways of using it.
For instance, if at some point in the future you get a Neuralink implant to enhance your mental abilities, or for recreational purposes, who owns that implant and has access to its data and functions? Based on current law, it’s almost certainly not you.
This may seem fine, until the company that owns the device threatens to deactivate it unless you pay for the latest upgrade, or you find yourself vulnerable to hackers because you didn’t buy into the upgrade plan. Who owns the device also raises questions about who owns your brain signals, and even who has the right to write data to your brain. We might be looking at a future where mandatory auto-updates rewrite not just your hardware but your mind.
This neural “write” capability of Neuralink’s technology raises a number of other issues. It’s a capability that’s essential for planned medical applications. But it’s easy to imagine people wanting to use the technology for enhancement — to increase cognitive ability, physical dexterity, perceptions, mood, and even personality.
Imagine being able to sharpen your mind or increase memory retention with an app on your phone, or change your mood at the flick of a switch. You could integrate Neuralink into gaming systems so you could viscerally feel the action as well as see it. A neural implant could also be used to amplify on-screen emotion when watching movies.
These capabilities are likely to become feasible in the near to medium future, but there are potential downsides. Imagine ads that trigger an emotional response, news feeds that can manipulate how you feel, or apps that allow others to alter how you behave with a simple text. And on top of this, the dangers of having your smartphone stolen or hacked take on a whole other dimension.
Now we’re heading into speculative territory. And to be fair, Musk has made it clear that he’s against the idea of funding neural implants with neural adverts. Yet, as the technology matures, these are possibilities that need to be explored if Neuralink is to be developed and used ethically and responsibly.
Let’s suppose that none of the aforementioned negatives come to pass. There’s still the question of who gets access to the technology, and who does not. If brain-computer interfaces truly do hold the ability to substantially enhance what a user can achieve, are we in danger of creating a two-tier society where the privileged are able to get better jobs, earn more, and have a higher quality of life, compared to those who are too poor or too “unworthy” in the eyes of society to get hold of the tech?
This is not an idle question. Already, there are social disparities in who gets to benefit from new technologies, disparities that increase the divide between the privileged and the marginalized in society. We must consider the possibility that neural implants could massively widen this gap.
Moving forward responsibly
Unless ethical questions like these are addressed early on, we’re either looking at a future where brain-computer interfaces create more problems than they solve, or one where Neuralink has gone bust because it didn’t take the social and ethical concerns seriously enough from the beginning.
The good news is that there’s still time for Neuralink and others to develop a robust strategy for ethical and responsible innovation, so everyone can realize the full benefits of the technology. There are resources that can help here — the Risk Innovation Accelerator that’s a part of my Risk Innovation Lab is just one of them. But unless we start a wider, deeper, and more informed set of conversations, the future of Musk’s vision doesn’t look quite as rosy as he might hope.
Update: Elon Musk and Neuralink’s paper “An Integrated Brain-Machine Interface Platform With Thousands of Channels” was published in the Journal of Medical Internet Research on October 31 2019. It was accompanied by a commentary on “The Ethical and Responsible Development and Application of Advanced Brain Machine Interfaces.”