The Many Ways Elon Musk’s Neuralink Could Go Wrong
The brain-machine interface will require innovative ways of thinking about risk. We mapped it out.
Several weeks ago, I was asked to write a commentary for the Journal of Medical Internet Research (JMIR) on the ethics of Elon Musk and Neuralink’s much-touted brain-machine interface. JMIR had accepted Musk’s paper on the technology for publication and wanted to accompany its release with a number of invited papers exploring different aspects of the technology.
That commentary has just been published alongside Musk and Neuralink’s paper. Rather than writing another commentary on the ethical challenges of cutting-edge brain tech, I set out to apply our work on risk innovation to the development of the technology. As a result, my colleague Marissa Scragg and I carried out a unique assessment of what it might take for Musk and Neuralink to create a product that is good for society as well as for the company’s bottom line.
What is risk innovation?
Risk innovation is a concept acknowledging that emerging technologies and trends are creating a risk landscape so different from what existed in the past that conventional ways of thinking about risk are simply not up to the task of navigating it.
Just as innovation is ultimately about creating value through the practical application of creativity to unmet needs and opportunities, risk innovation focuses on the creation of value through creative approaches to potential dangers and pitfalls.
This is at the heart of the Arizona State University Risk Innovation Nexus, where we’re transforming the ideas behind risk innovation into practical tools and resources to help organizations plan for and navigate potentially blindsiding risks. We call these “orphan risks” as they are often ignored, yet are frequently pivotal to an enterprise’s success or failure. And many of them involve hard-to-quantify social risks that have a tendency to upset an organization’s pathway to success — risks such as the consequences of ethical missteps, threats to privacy and autonomy, or actions that undermine social justice.
In the Risk Innovation Nexus, we start with “value” — what’s important to an enterprise, as well as to its investors, customers, and communities. We then ask what orphan risks might potentially threaten current and future value. (We have a list of 18 of them that we’ve found especially helpful in identifying potential issues.) This allows us to map out a landscape that highlights risks that would otherwise be missed, and to develop strategies for navigating these risks.
Risk innovation and brain-machine interface tech
Rather than critique brain-machine interface technology on ethical grounds, we asked what the orphan risks landscape might look like for advanced brain-machine interface developers, and how they might start planning early on to successfully navigate it. In doing so, we were able to highlight areas where companies like Neuralink have the opportunity to increase their chances of success because they are aware of and responsive to how their technology might impact others or be perceived as impacting them.
In effect, our analysis provided a unique insight into how ethical and socially responsible innovation can increase the chances of being successful in the case of advanced brain-machine interfaces.
You can read our full analysis in the open-access commentary. The short version is that, based on our assessment of critical areas of value to Neuralink, as well as to the company’s investors, its future customers, and the communities that it and its technology are likely to touch, there are a number of areas beyond the technology and its regulatory approval that the company should consider addressing sooner rather than later to increase its chances of success.
The analysis draws from a “risk landscape” (below) that highlights key orphan risks and risk clusters, based on identified areas of value. This is, of course, a subjective process, and one that needs to be treated with some caution. However, it is also one that reveals potential pitfalls and blindsides that would otherwise be missed and helps alert organizations to risks that are all too easily overlooked.
This mapping exercise reveals insights into orphan risks that Neuralink and similar enterprises would probably do well to prepare for. One important factor will be how investors, users, and others perceive the potential workings and impacts of advanced brain-machine interfaces. Something else to pay attention to: social trends that may either create opportunities for development or throw up potential barriers — for instance, a public backlash against brain-machine interface-based augmentation.
Not surprisingly for an invasive neural read-write technology, our risk analysis highlights novel health impacts and “black swan” events — low-probability but high-impact risks that aren’t predictable based on past experience. In this case, the mapping suggests attention should be paid to the agility of the enterprise and the technology in navigating around unexpected hurdles.
Reading the map above from left to right, there is a greater density of orphan risks identified under the “organizations and systems” category, suggesting that some of the greatest potential threats to successful advanced brain-machine interface technologies may lie within the company itself and the formal organizations governing how it operates. These include the degree to which organizational values and culture align with how the technology is developed and used, evolving regulations that may play an outsized influence on development pathways, and national and international standards that guide and limit technology performance and use.
The map also provides insights into potential threats to value within key constituencies that may, in turn, lead to barriers to success. Here, orphan risks that do not directly impact Neuralink, but are nevertheless likely to influence its success, include the ethics surrounding how the technology is developed and used, how it potentially leads to social injustice, and whether it raises privacy concerns. This broader perspective on orphan risks also raises other issues such as how trustworthy the enterprise is perceived to be and the dangers of “bad actors” giving the technology a negative reputation that risks poisoning the market.
Finally, the risk map helps identify risks that cluster and converge in ways that increase the chances of truly blindsiding impacts. Risks associated with ethics, health, governance, and regulation dominate the landscape and, if not planned for, could build upon each other to create insurmountable obstacles to value creation. Rather than planning for just one risk at a time, addressing these clusters can reveal where resources are best focused to protect value to innovators and investors, as well as to users and the communities they are part of.
The bottom line
This application of our risk innovation approach to Neuralink’s tech indicates that there’s a crowded risk landscape between the current state of the technology and its successful development and use, and that many of these risks pivot on how companies address potentially serious ethical and social issues. As we discuss in the paper, these include the potential for the technology to widen the gap between the rich and poor (and the privileged and the marginalized); to challenge social norms around the use of enhancement technologies; and to raise complex issues around data privacy, user security, and autonomy — especially where, for instance, manufacturers retain ownership of implanted brain-machine interfaces, or where their operation and upkeep depend on a subscription service. They also highlight the need for developers and others to consider seriously how they build trust with the communities they depend on, and how to avoid actions that may erode it.
By mapping out the orphan risk landscape they face — and iterating frequently as the landscape shifts and changes — companies like Neuralink are more likely to succeed in developing advanced brain-machine interfaces that lead to powerful therapeutic interventions and even transformative enhancement capabilities, while simultaneously being economically successful, ethical, and socially responsible. Yet the bottom line is that this will only happen if innovators like Musk, and the people who work for them, are capable of looking beyond technical performance and conventional risks, and cultivating a broader understanding of the risk landscape they face and the approaches they need to adopt in order to successfully navigate it.
Based on Maynard AD, Scragg M. The Ethical and Responsible Development and Application of Advanced Brain-Machine Interfaces. J Med Internet Res 2019;21(10):e16321 DOI: 10.2196/16321