Navigating the Complex World of Advanced Brain-Machine Interfaces
Does Elon Musk’s Neuralink have what it takes to succeed in the exceptionally complex risk landscape of human augmentation?
In 2018, I wrote extensively about the emerging opportunities and challenges around augmentation technologies in the book Films from the Future — including the advances being promised by Elon Musk’s company Neuralink.
As Neuralink gears up to demonstrate their latest advances in cutting-edge brain-machine interface technology, I thought it worth posting a few relevant excerpts from the book here. These are from the chapter that is inspired by the 1995 anime movie Ghost in the Shell, and focuses on the opportunities and challenges surrounding human augmentation.
Through a Glass Darkly
On June 4, 2016, Elon Musk tweeted: “Creating a neural lace is the thing that really matters for humanity to achieve symbiosis with machines.”
This might just have been a bit of entrepreneurial frippery, inspired by the science fiction writer Iain M. Banks, who wrote extensively about “neural lace” technology in his Culture novels. But Musk, it seems, was serious, and in 2017 he launched a new company to develop ultra-high-speed brain-machine interfaces.
Musk’s company, Neuralink, set out to disrupt conventional thinking and transform what is possible with human-machine interfaces, starting with a talent-recruitment campaign that boldly stated, “No neuroscience experience is required.” Admittedly, it’s a little scary to think that a bunch of computer engineers and information technology specialists could be developing advanced systems to augment the human brain. But it’s a sign of the interesting times we live in that, as entrepreneurs and technologists become ever more focused on fixing what they see as the limitations of our biological selves, the boundaries between biology, machines, and cyberspace are becoming increasingly blurred.
Plugged In, Hacked Out
In Western culture, we deeply associate our brains with our identity. They are the repository of the memories and the experiences that define us. But they also represent the inscrutable neural circuits that guide and determine our perspectives, our biases, our hopes and dreams, our loves, our beliefs, and our fears. Our brain is where our cognitive abilities reside (“gut” instinct notwithstanding); it’s what enables us to form bonds and connections with others, and it’s what determines our capacity to be a functioning and valuable part of society — or so our brains lead us to believe. To many people, these are essential components of the cornucopia of attributes that define them, and to lose them, or have them altered, would be to lose part of themselves.
This is, admittedly, a somewhat skewed perspective. Modern psychology and neurology are increasingly revealing the complexities and subtleties of the human brain and the broader biological systems it’s intimately intertwined with. Yet despite this, for many of us, our internal identity — how we perceive and understand ourselves, and who we believe we are — is so precious that anything that threatens it is perceived as a major risk. This is why neurological diseases like Alzheimer’s can be so distressing, and personality changes resulting from head traumas so disturbing. It’s also why it can be so unsettling when we see people we know undergoing changes in their personality or beliefs. These changes force us to realize that our own identity is malleable, and that we in turn could change. And, as a result, we face the realization that the one thing we often rely on as being a fixed certainty, isn’t.
Over millennia, we’ve learned as a species to cope with the fragility of self-identity. But this fragility doesn’t sit comfortably with us. Rather, it can be extremely distressing, as we recognize that disease, injuries, or persuasive influences can change us. As a society, we succeed most of the time in absorbing this reality, and even in some cases embracing it. But neural enhancements bring with them a brand new set of threats to self-identity, and ones that I’m not sure we’re fully equipped to address yet, including vulnerability to outside manipulation.
Elon Musk’s neural lace is a case in point, as a technology with both vast potential and largely unknown risks. It’s easy to imagine how overlaying the human brain with a network of connections, processors, and communications devices could vastly enhance our abilities and allow us to express ourselves more completely. Imagine if you could control your surroundings through your thoughts. Or you could type, or search the net, just by thinking about it. Or even if you could turbocharge your cognitive abilities at the virtual press of a button, or change your mood, recall information faster, get real-time feedback on who you’re speaking with, save and recall experiences, manipulate vast cyber networks, all through the power of your mind. It would be like squeezing every technological advancement from the past five hundred years into your head, and magnifying it a hundred-fold. If technologies like the neural lace reached their full potential, they would provide an opportunity for users to far exceed their biological potential, and express their self-identity more completely than ever before.
It’s not hard to see how seductive some people might find such a technology. Of course, we’re a long, long way from any of this. Despite massive research initiatives on the brain, we’re still far from understanding the basics of how it operates, and how we can manipulate it. Yet this isn’t stopping people from experimenting, despite where such experiments might lead.
In 2014, the neurosurgeon Phil Kennedy underwent elective brain surgery, not to correct a problem, but in an attempt to create a surgically implanted brain-machine interface. Kennedy had developed a deep brain probe that overcame the limitations of simply placing a wire in someone’s brain, by encouraging neurons to grow into a hollow glass tube. By experimenting on himself, he hoped to gain insight into how the parts of the brain associated with language operate, and whether he could decode neural signals as words. But he also had a vision of a future where our brains are intimately connected to machines, one that he captured in the 2012 novel 2051, published under the pseudonym Alpha O. Royal.
In this brief science fiction story, Kennedy, a.k.a. Alpha O. Royal, describes a future where brains can be disconnected from their bodies, and people can inhabit a virtual world created by sensors and probes that directly read and stimulate their neurons. In the book, this becomes the key that opens up interplanetary travel, as hurling a wired-up brain through space turns out to be a lot easier than having to accompany it with a body full of inconvenient organs. Fantastical as the book is, Kennedy uses it to articulate his belief that the future of humanity will depend on connecting our brains to the wider world through increasingly sophisticated technologies; starting with his hollow brain probes, and extending out to wireless-linked probes that are able to read and control neurons via light pulses.
Amazingly, we are already moving closer to some of the sensing technology that Kennedy envisions in 2051. In 2016, researchers at the University of California, Berkeley announced they had built a millimeter-sized wireless neural sensor that they dubbed “neural dust.” Small numbers of these, it was envisaged, could be implanted in someone’s head to provide wireless feedback on neural activity from specific parts of the brain. The idea of neural dust is still at a very early stage of development, but it’s not beyond the realm of reason that these sensors could one day be developed into sophisticated wireless brain interfaces. And so, while Kennedy’s sci-fi story stretches credulity, reality isn’t as far behind as we might think.
There’s another side of Kennedy’s story that is relevant here, though. 2051 is set in a future where artificial intelligence and “nanobots” (which we’ll reencounter in chapter nine) have become a major threat. In an admittedly rather silly plotline, we learn that the real-life futurist and transhumanist Ray Kurzweil has loaned China nanobots that combine advanced artificial intelligence with the ability to self-replicate. These proceed to take over China and threaten the rest of the world. And they have the ability to hack into and manipulate wired-up brains. Because everything that these brains experience comes through their computer connections, the AI nanobots can effectively manipulate someone’s reality with ease, and even create an alternate reality that their victims are incapable of recognizing as unreal.
The twist in Kennedy’s tale is that the fictitious nanobots simply want global peace and universal happiness. And the logical route to achieving this, according to their AI hive-mind, is to assimilate humans, and convince them to become part of the bigger collective. It’s all rather Borg-like if you’re a Star Trek fan, but with a benevolent twist.
Kennedy’s story is, admittedly, rather fanciful. But he does hit on what is probably one of the most challenging aspects of having a fully connected brain, especially in a world where we are ceding increasing power to autonomous systems: vulnerability to hacking.
Some time ago, I was speaking with a senior executive at IBM, and he confessed that, from his elevated perspective, cybersecurity is one of the greatest challenges we face as a global society. As we see the emergence of increasingly clever hacks on increasingly powerful connected systems, it’s not hard to see why.
Cyberspace — the sum total of our computers, the networks they form, and the virtual world they represent — is unique in that it’s a completely human-created dimension that sits on top of our reality (a concept we come back to in chapter nine and the movie Transcendence). We have manufactured an environment that quite literally did not exist until relatively recently. It’s one where we can now build virtual realities that surpass our wildest dreams. And because, in the early days of computing, we were more interested in what we could do rather than what we should (or even how we should do it), this environment is fraught with vulnerabilities. Not to put too fine a point on it, we’ve essentially built a fifth dimension to exist in, while making up the rules along the way, and not worrying too much about what could go wrong until it was too late.
Of course, the digital community learned early on that cybersecurity demanded at least as much attention to good practices, robust protocols, smart design, and effective governance as any physical environment, if people weren’t going to get hurt. But certainly, in the early days, this was seasoned with the idea that, if everything went pear-shaped, someone could always just pull the plug.
Nowadays, as the world of cyber is inextricably intertwined with biological and physical reality, this pulling-the-plug concept seems like a quaint and hopelessly outmoded idea. Cutting off the power simply isn’t an option when our water, electricity, and food supplies depend on cyber-systems, when medical devices and life-support systems rely on internet connectivity, where cars, trucks and other vehicles cannot operate without being connected, and where financial systems are utterly dependent on the virtual cyber worlds we’ve created.
It’s this convergence between cyber and physical realities that is massively accelerating current technological progress. But it also means that cyber-vulnerabilities have sometimes startling real-world consequences, including making everything from connected thermostats to digital pacemakers vulnerable to attack and manipulation. And, not surprisingly, this includes brain-machine interfaces.
In Ghost in the Shell, this vulnerability leads to ghost hacking, the idea that if you connect your memories, thoughts, and brain functions to the net, someone can use that connection to manipulate and change them. It’s a frightening idea that, in our eagerness to connect our very soul to the net, we risk losing ourselves, or worse, becoming someone else’s puppet. It’s this vulnerability that pushes Major Kusanagi to worry about her identity, and to wonder if she’s already been compromised, or whether she would even know if she had been. For all she knows, she is simply someone else’s puppet, being made to believe that she’s her own person.
With today’s neural technologies, this is a far-fetched fear. But still, there is near-certainty that, if and when someone connects a part of their brain to the net, someone else will work out how to hack that connection. This is a risk that far transcends the biological harms that brain implants and neural nets could cause, potentially severe as these are. But there’s perhaps an even greater risk here. As we move closer to merging the biological world we live in with the cyber world we’ve created, we’re going to have to grapple with living in a world that hasn’t had billions of years of natural selection for the kinks to be ironed out, and that reflects all the limitations and biases and illusions that come with human hubris. This is a world wherein human-made monsters lie waiting for us to stumble on them. And if we’re not careful, we’ll be giving people a one-way neurological door into it.
Not that I think this should be taken as an excuse not to build brain-machine interfaces. And in reality, it would be hard to resist the technological impetus pushing us in this direction. But at the very least, we should be working with maps that say in big bold letters, “Here be monsters.” And one of the “monsters” we’re going to face is the question of who has ultimate control over the enhanced and augmented bodies of the future.
Your Corporate Body
If you have a body augmentation or an implant, who owns it? And who ultimately has control over it? It turns out that if you purchase and have installed a pacemaker or implantable cardioverter defibrillator, or an artificial heart or other life-giving and life-saving devices, who can do what with it isn’t as straightforward as you might imagine. As a result, augmentation technologies like these raise a really tricky question — as you incorporate more tech into your body, who owns you? We’re still a long way from the body augmentations seen in Ghost in the Shell, but the movie nevertheless foreshadows questions that are going to become increasingly important as we continue to replace parts of our bodies with machines.
In Ghost, Major Kusanagi’s body, her vital organs, and most of her brain are manufactured by the company Megatech. She’s still an autonomous person, with what we assume is some set of basic human rights. But her body is not her own. Talking with her colleague Batou, the two reflect that, if she were to leave Section 9, she would need to leave most of her body behind. Despite the illusion of freedom, Kusanagi is effectively in indentured servitude to someone else by virtue of the technology she is constructed from.
Even assuming that there are ethical rules against body repossession, Kusanagi is dependent on regular maintenance and upgrades. Miss a service, and she runs the risk of her body beginning to malfunction, or becoming vulnerable to hacks and attacks. In other words, her freedom is deeply constrained by the company that owns her body and the substrate within which her mind resides.
In 2015, Hugo Campos wrote an article for the online magazine Slate with the sub-heading, “I can’t access the data generated by my implanted defibrillator. That’s absurd.” Campos had a device inserted into his body — an implantable cardioverter defibrillator, or ICD — that constantly monitored his heartbeat, and that would jump-start his heart, were it to falter. Every seven years or so, the implanted device’s battery runs low, and the ICD needs to be replaced, in what’s referred to as a “generator changeout.” As Campos describes, many users of ICDs use this as an opportunity to upgrade to the latest model. And in his case, he was looking for something specific with the changeout: an ICD that would allow him to personally monitor his own heart.
This should have been easy. ICDs are internet-connected these days, and regularly send the data they’ve collected to healthcare providers. Yet patients are not allowed access to this data, even though it’s generated by their own body. Campos’ solution was to purchase an ICD programmer off eBay and teach himself how to use it. He took the risk of flying close to the edge of legality to get access to his own medical implant.
Campos’ experience foreshadows the control and ownership challenges that increasingly sophisticated implants and cyber/machine augmentations raise. As he points out, “Implants are the most personal of personal devices. When they become an integral part of our organic body, they also become an intimate part of our identity.” And by extension, without their ethical and socially responsive development and use, a user’s identity becomes connected to those who have control over the device and its operations.
In the case of ICDs, manufacturers and healthcare providers still have control over the data collected and generated by the device. You may own the ICD, but you have to take on trust what you are told about the state of your health. And you are still beholden to the “installers” for regular maintenance. Once the battery begins to fail, there are only so many places you can go for a refit. And unlike a car or a computer, the consequence of not having the device serviced or upgraded is possible death. It’s almost like being locked into a phone contract where you have the freedom to leave at any time, but contract “termination” comes with more sinister overtones. Almost, but not quite, as it’s not entirely clear if users of ICDs even have the option to terminate their contracts.
In 2007, Ruth and Tim England and John Coggins grappled with this dilemma through the hypothetical case of an ICD in a patient with terminal cancer. The hypothetical they set up was to ask who has the right to deactivate the device, if constant revival in the case of heart failure leads to continued patient distress. The scenario challenges readers of their work to think about the ethics of patient control over such implants, and the degree of control that others should have. Here, things turn out to be murkier than you might think. Depending on how the device is classified, whether it is considered a fully integrated part of the body, for instance, or an ongoing medical intervention, there are legal ramifications to who does what, and how. If, for instance, an ICD is considered simply as an ongoing medical treatment, the healthcare provider is able to decide on its continued use or termination, based on their medical judgment, even if this is against the wishes of the patient. In other words, the patient may own the ICD, but they have no control over its use, and how this impacts them.
On the other hand, if the device is considered to be as fully integrated into the body as, say, the heart itself, a physician will have no more right to permanently switch it off than they have the right to terminally remove the heart. Similarly, the patient does not legally have the right to tamper with it in a way that will lead to death, any more than they could legally kill themselves.
In this case, England and colleagues suggest that intimately implanted devices should be treated as a new category of medical device. They refer to these as “integral devices” that, while not organic, are nevertheless a part of the patient. They go on to suggest that this definition, which lies somewhere between the options usually considered for ICDs, will allow more autonomy on the part of patient and healthcare provider. And specifically, they suggest that “a patient should have the right to demand that his ICD be disabled, even against medical advice.”
England’s work is helpful in thinking through some of the complexities of body implant ethics. But it stops far short of addressing two critical questions: Who has the right to access and control augmentations designed to enhance performance (rather than simply prevent death), and what happens when critical upgrades or services are needed?
This is where we’re currently staring into an ethical and moral vacuum. It might not seem such a big deal when most integrated implants at the moment are health-protective rather than performance-enhancing. But we’re teetering on the cusp of technological advances that are likely to sweep us toward an increasingly enhanced future, without a framework for thinking about who controls what, and who ultimately owns who you are.
This is very clear in emerging plans for neural implants, whether it’s Neuralink’s neural lace or other emerging technologies for connecting your brain to the net. While these technologies will inevitably have medical uses—especially in treating and managing neurological diseases like Parkinson’s disease—the expectation is that they will also be used to increase performance and ability in healthy individuals. And as they are surgically implanted, understanding who will have the power to shut them down, or to change their behavior and performance, is important. As a user, will you have any say in whether to accept an overnight upgrade, for instance? What will your legal rights be when a buggy patch leads to a quite-literal brain freeze? What happens when you’re given the choice of paying for “Neuralink 2.0” or keeping an implant that is no longer supported by the manufacturer? And what do you do when you discover your neural lace has a hardware vulnerability that makes it hackable?
This last question is not idle speculation. In August 2016, the short-selling firm Muddy Waters Capital LLC released a report claiming that ICDs manufactured by St. Jude Medical, Inc. were vulnerable to potentially life-threatening cyberattacks. The report claimed:
“We have seen demonstrations of two types of cyber-attacks against [St Jude] implantable cardiac devices (‘cardiac devices’): a ‘crash’ attack that causes cardiac devices to malfunction — including by apparently pacing at a potentially dangerous rate; and, a battery drain attack that could be particularly harmful to device dependent users. Despite having no background in cybersecurity, Muddy Waters has been able to replicate in-house key exploits that help to enable these attacks.”
St. Jude vehemently denied the accusations, claiming that they were aimed at manipulating the company’s value (the company’s stock price tumbled as the report was released). Less than a year later, St. Jude was acquired by medical giant Abbott. But shortly after this, hacking fears led to the US Food and Drug Administration recalling nearly half a million formerly St. Jude-branded pacemakers due to an identified cybersecurity vulnerability.
Fortunately, there were no recorded cases of attacks in this instance, and the fix was a readily implementable firmware update. But the case illustrates just how vulnerable web-connected intimate body enhancements can be, and how dependent users are on the manufacturer. Obviously, such systems can be hardened against attack. But the reality is that the only way to be completely cyber-secure is to have no way to remotely connect to an implanted device. And increasingly, this defeats the purpose for why a device is, or might be, implanted in the first place.
As in the case of the St. Jude pacemaker, there’s always the possibility of remotely applied patches, much like the security patches that seem to pop up with annoying frequency on computer operating systems. With future intimate body enhancements, there will almost certainly be a continuing duty of care from suppliers to customers to ensure their augmentations are secure. But this in turn ties the user, and their enhanced body, closely to the provider, and it leaves them vulnerable to control by the providing company. Again, this brings to mind the scenario where you, as an enhanced customer, face the choice between keeping your enhancement’s buggy, security-vulnerable software, or paying for the operating system upgrade. The company may not own the hardware, but without a doubt, they own you, or at least your health and security.
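For the technically curious, a minimal sketch of the pattern behind such patches is shown below: the device will only accept firmware signed with the manufacturer’s private key. Everything in it is illustrative rather than drawn from any real implant (the update format, the names, and the key handling are all assumptions), but the underlying idea is standard code-signing practice, and it is precisely the mechanism that binds an enhanced body to its provider.

```python
# Illustrative sketch only: a hypothetical implant that accepts a firmware
# update solely when it carries a valid signature from the manufacturer.
# The names (VendorSignedUpdate, apply_update) are invented for this example.
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


@dataclass
class VendorSignedUpdate:
    firmware: bytes   # the new firmware image
    signature: bytes  # the vendor's Ed25519 signature over that image


def apply_update(trusted_key: Ed25519PublicKey,
                 update: VendorSignedUpdate) -> bool:
    """Install the update only if it was signed by the manufacturer.

    The public key is baked into the device at manufacture, so only the
    vendor can ever produce an installable patch: at once the security
    guarantee and the lock-in.
    """
    try:
        trusted_key.verify(update.signature, update.firmware)
    except InvalidSignature:
        return False  # reject: not from the vendor, or tampered with
    # ...at this point a real device would write the image to flash...
    return True


# Demo: the vendor signs a patch, and the device accepts it.
vendor_key = Ed25519PrivateKey.generate()  # held only by the vendor
implant_key = vendor_key.public_key()      # burned into the implant

image = b"v2.1: patch battery-drain vulnerability"
patch = VendorSignedUpdate(firmware=image, signature=vendor_key.sign(image))
assert apply_update(implant_key, patch)    # the genuine patch installs

forged = VendorSignedUpdate(firmware=b"malicious image",
                            signature=patch.signature)
assert not apply_update(implant_key, forged)  # anything else is rejected
```

Note the asymmetry this creates: the baked-in key ensures that no attacker can push code to the implant, but it equally ensures that no one other than the manufacturer, not even the person the device sits inside, can ever patch it.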
Things get even more complex as the hardware of implantable devices becomes outdated, and wired-in security vulnerabilities are discovered. On October 21, 2016, a series of distributed denial of service (DDoS) attacks occurred around the world. Such attacks use malware that hijacks computers and other devices and redirects them to swamp cyber-targets with massive amounts of web traffic — so much traffic that they effectively take their targets out. What made the October 21 attacks different was that the hijacked devices were internet-connected “dumb devices”: home routers, surveillance cameras, and many others with a chip allowing them to be connected to the internet, creating an “Internet of Things.” It turns out that many of these devices, which are increasingly finding their way into our lives, have hardware that is outdated and vulnerable to being coopted by malware. And the only foolproof solution to the problem is to physically replace millions — probably billions — of chips.
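The arithmetic behind such attacks is brutally simple. The toy model below uses entirely made-up numbers (it is not a reconstruction of the October 21 attack), but it shows how a swarm of individually feeble devices can saturate even a generously provisioned service.

```python
# Toy model of a distributed denial-of-service attack. Every figure here is
# hypothetical, chosen only to illustrate the aggregate-traffic arithmetic.

HIJACKED_DEVICES = 500_000   # compromised routers, cameras, and so on
REQUESTS_PER_DEVICE = 30     # requests per second from each device
SERVER_CAPACITY = 2_000_000  # requests per second the target can answer
LEGIT_TRAFFIC = 150_000      # requests per second from genuine users

attack_traffic = HIJACKED_DEVICES * REQUESTS_PER_DEVICE  # 15,000,000 req/s
total_demand = attack_traffic + LEGIT_TRAFFIC

# If the server answers requests indiscriminately, genuine users get only
# their proportional share of its capacity.
served_fraction = min(1.0, SERVER_CAPACITY / total_demand)
legit_served = LEGIT_TRAFFIC * served_fraction

print(f"Attack traffic:    {attack_traffic:,} req/s")
print(f"Total demand:      {total_demand:,} req/s")
print(f"Share answered:    {served_fraction:.1%}")
print(f"Real users served: {legit_served:,.0f} of {LEGIT_TRAFFIC:,} req/s")
# => roughly 13% of requests get through; for most users, the site is dead.
```

Each hijacked device contributes a trickle of traffic its owner would never notice, which is exactly what makes cheap, forgotten, and unpatchable internet-connected devices such attractive recruits.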
The possibility of such vulnerabilities in biologically intimate devices and augmentations places a whole new slant on the enhanced body. If your enhancement provider has been so short-sighted as to use attackable hardware, who’s responsible for its security, and for physically replacing it if and when vulnerabilities are discovered? This is already a challenge, although thankfully tough medical device regulations have limited the extent of potential problems here so far. Imagine, though, where we might be heading with poorly-regulated innovation around body-implantable enhancements that aren’t designed for medical reasons, but to enhance ability. You may own the hardware, and you may have accepted any “buyer beware” caveats it came with. But who effectively owns you, when you discover that the hardware implanted in your legs, your chest, or your brain, has to be physically upgraded, and you’re expected to either pay the costs, or risk putting your life and well-being on the line?
Without a doubt, as intimate body-enhancing technologies become more accessible, and consumers begin to clamor after what (bio)tech companies are producing, regulations are going to have to change and adapt to keep up. Hopefully this catch-up will include laws that protect consumers’ quality of life for the duration of having machine enhancements surgically attached or embedded. That said, there is a real danger that, in the rush for short-term gratification, we’ll see pushback against regulations that make it harder for consumers to get the upgrades they crave, and more expensive for manufacturers to produce them.
This is a situation where Ghost in the Shell provides what I suspect is a deeply prescient foreshadowing of some of the legal and social challenges we face over autonomy, as increasingly sophisticated enhancements become available. The question is, will anyone pay attention before we’re plunged into an existential crisis around who we are, and who owns us?
From “Films from the Future: The Technology and Morality of Sci-Fi Movies.” Published by Mango Publishing, November 2018