AI and the lure of permissionless innovation
What could possibly go wrong as we pry the guardrails of responsibility off artificial intelligence and other advanced technologies?
A couple of months ago, Reid Hoffman posted the following on X:
I have a lot of respect for Reid, and I have a good sense of where he’s coming from here. But I found his endorsement of permissionless innovation somewhat jarring — especially as it’s an idea that I’ve cautioned against in my work and writing over the years.
The idea of permissionless innovation goes back some way, but it was succinctly defined by Adam Thierer and Jonathan Camp in 2017 as “the general freedom to innovate without prior constraint.”
A more complete description from Thierer’s 2016 10-point blueprint for permissionless innovation and public policy states that “experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”1
Anyone who’s familiar with the Silicon Valley mantra of fail fast, fail forward — or the more aggressive version of moving fast and breaking things — will recognize the idea of innovating without permission. And I’d be the first to acknowledge that there’s merit in allowing people to make mistakes and learn from them rather than being hyper risk-averse.
And yet, compelling as some of Thierer’s arguments are for loosening the chains of oversight and governance, I have deep concerns about the idea that we can fix whatever problems transformative and potentially destructive technologies cause after the fact — rather than anticipating them and navigating around them.2
And while it’s far from the only area of rapid technological advance that gives me pause for thought, the potential consequences of throwing responsibility out of the window with AI in particular — on the assumption that everything will be OK in the end — worry me a lot.
I first wrote about permissionless innovation in 2018 in the book Films from the Future. Reid’s comments prompted me to revisit what I’d written back then — with some trepidation, I must confess, given how the world and my own thinking have evolved over the past few years.
On a re-read, I was interested to see how much of what I wrote back then still stands — as long as you filter out what now seems a rather naive perspective on Elon Musk, and recognize just how far AI has come in the intervening seven years.
In fact, my thinking on permissionless innovation from 2018 seems more relevant now than it was then, given the recent upswell of enthusiasm for throwing caution to the wind around all things tech-related.3 And so I thought I’d reproduce the relevant excerpt here.
Before I do though, I need to provide some context:
Back in 2018, Elon Musk was still seen by many as a maverick but inspiring entrepreneur who had the audacity to dream big. And artificial general intelligence — or AGI — felt like a technology that was still a long way off. Despite this, Musk and others were concerned enough about the potential dangers of AGI that they were advocating that researchers and developers proceed with caution.
In writing about permissionless innovation at the time, I set out to explore the tension between the potential promise and perils of AI through a couple of lenses.
The first was through the 2014 Alex Garland film Ex Machina, which centers on an AGI-enabled robot (Ava), created by an egocentric and mega-wealthy entrepreneur (Nathan Bateman).
And the second was Plato’s allegory of the cave, which explores the divide between our narrow perceptions and understanding of the world we inhabit, and a larger reality that we’re a part of.
With that, here is that excerpt from chapter eight of Films from the Future:4
The Lure of Permissionless Innovation
From Films from the Future: The Technology and Morality of Sci-Fi Movies. Andrew Maynard, 2018
On December 21, 2015, Elon Musk’s company SpaceX made history by being one of the first to successfully land a rocket back on Earth after sending it into space.5 On the same day, Musk—along with Bill Gates and the late Stephen Hawking—was nominated for the 2015 Luddite Award.6 Despite his groundbreaking technological achievements, Musk was being called out by the Information Technology & Innovation Foundation (ITIF) for raising concerns about the unfettered development of AI.
Musk, much to the consternation of some, has been, and continues to be, a vocal critic of unthinking AI development. It’s somewhat ironic that Tesla, Musk’s electric-car company, is increasingly reliant on AI-based technologies to create a fleet of self-driving, self-learning cars. Yet Musk has long argued that the potential future impacts of AI are so profound that great care should be taken in its development, lest something go irreversibly wrong—like, for instance, the emergence of super-intelligent computers that decide the thing they really can’t stand is people.
While some commentators have questioned Musk’s motives (he has a vested interest in developing AI in ways that will benefit his investments), his defense of considered and ethical AI development is in stark contrast to the notion of forging ahead with new innovations without first getting a green light from anyone else. And this leads us to the notion of “permissionless innovation.”
In 2016, Adam Thierer, a member of the Mercatus Center at George Mason University, published a ten-point blueprint for “Permissionless Innovation and Public Policy.”7 The basic idea behind permissionless innovation is that experimentation with new technologies (and business models) should generally be permitted by default, and that, unless a compelling case can be made for serious harm to society resulting from the innovation, it should be allowed to “continue unabated.” The concept also suggests that any issues that do arise can be dealt with after the fact.
To be fair, Thierer’s blueprint for permissionless innovation does suggest that “policymakers can adopt targeted legislation or regulation as needed to address the most challenging concerns where the potential for clear, catastrophic, immediate, and irreversible harm exists.” Yet it still reflects an attitude that scientists and technologists should be trusted and not impeded in their work, and that it’s better to ask for forgiveness than permission in technology innovation. And it’s some of the potential dangers of this approach to innovation that Ex Machina reveals through the character of Nathan Bateman.
Nathan is, in many ways, a stereotypical genius mega-entrepreneur. His smarts, together with his being in the right place at the right time (and surrounded by the right people), have provided him with incredible freedom to play around with new tech, with virtually no constraints. Living in his designer house, in a remote and unpopulated area, and having hardly any contact with the outside world, he’s free to pursue whatever lines of innovation he chooses. No one needs to give him permission to experiment.
Without a doubt, there’s a seductive lure to being able to play with technology without others telling you what you can and cannot do. And it’s a lure that has its roots in our innate curiosity, our desire to know, and understand, and create.
As a lab scientist, I was driven by the urge to discover new things. I was deeply and sometimes blindly focused on designing experiments that worked, and that shed new light on the problems I was working on. Above all, I had little patience for seemingly petty barriers that stood in my way. I’d like to think that, through my research career, I was responsible. And through my work on protecting human health and safety, I was pretty tuned in to the dangers of irresponsible research. But I also remember the times when I pushed the bounds of what was probably sensible in order to get results.
There was one particularly crazy all-nighter while I was working toward my PhD, where I risked damaging millions of dollars of equipment by bending the rules, because I needed data, and I didn’t have the patience to wait for someone who knew what they were doing to help me. Fortunately, my gamble paid off—it could have easily ended badly, though. Looking back, it’s shocking how quickly I sloughed off any sense of responsibility to get the data I needed. This was a pretty minor case of “permissionless innovation,” but I regularly see the same drive in other scientists, and especially in entrepreneurs—that all-consuming need to follow the path in front of you, to solve puzzles that nag at you, and to make something that works, at all costs.
This, to me, is the lure of permissionless innovation. It’s something that’s so deeply engrained in some of us that it’s hard to resist. But it’s a lure that, if left unchecked, can too often lead to dark and dangerous places.
By calling for checks and balances in AI development, Musk and others are attempting to govern the excesses of permissionless innovation. Yet I wonder how far this concern extends, especially in a world where a new type of entrepreneur is emerging who has substantial power and drive to change the face of technology innovation, much as Elon Musk and Jeff Bezos are changing the face of space flight.
AI is still at too early a stage in its development for us to know what the dangers of permissionless innovation might be. Despite the hype, AI and AGI (Artificial General Intelligence) are still little more than algorithms that are smart within their constrained domains, but have little agency beyond this. Yet the pace of development, and the increasing synergies between cybernetic substrates, coding, robotics, and bio-based and bio-inspired systems, are such that the boundaries separating what is possible and what is not are shifting rapidly. And here, there is a deep concern that innovation with no thought to consequences could lead to irreversible and potentially catastrophic outcomes.
In Ex Machina, Nathan echoes many other fictitious innovators in this book: John Hammond in Jurassic Park (chapter two), Lamar Burgess in Minority Report (chapter four), the creators of NZT in Limitless (chapter five), Will Caster in Transcendence (chapter nine), and others. Like these innovators, he considers himself above social constraints, and he has the resources to act on this. Money buys him the freedom to do what he wants. And what he wants is to create an AI like no one has ever seen before.
As we discover, Nathan realizes there are risks involved in his enterprise, and he’s smart enough to put safety measures in place to manage them. It may not even be a coincidence that Ava comes into being hundreds of miles from civilization, surrounded by a natural barrier to prevent her escaping into the world of people. In the approaches he takes, Nathan’s actions help establish the idea that permissionless innovation isn’t necessarily reckless innovation. Rather, it’s innovation that’s conducted in a way that the person doing it thinks is responsible. It’s just that, in Nathan’s case, the person who decides what is responsible is clearly someone who hasn’t thought beyond the limit of his own ego.
This in itself reveals a fundamental challenge with such unbounded technological experimentation. With the best will in the world, a single innovator cannot see the broader context within which they are operating. They are constrained by their understanding and mindset. They, like all of us, are trapped in their own version of Plato’s Cave, where what they believe is reality is merely their interpretation of shadows cast on the walls of their mind. But, unlike Plato’s prisoners, they have the ability to create technologies that can and will have an impact beyond this cave. And, to extend the metaphor further, they have the ability to create technologies that are able to see the cave for what it is, and use this to their advantage.
This may all sound rather melodramatic, and maybe it is. Yet perhaps Nathan’s biggest downfall is that he had no translator between himself and a bigger reality. He had no enlightened philosopher to guide his thinking and reveal to him greater truths about his work and its potential impacts. To the contrary, in his hubris, he sees himself as the enlightened philosopher, and in doing so he becomes mesmerized and misled by shadow-ideas dancing across the wall of his intellect.
This broader reality that Nathan misses is one where messy, complex people live together in a messy, complex society, with messy, complex relationships with the technologies they depend on. Nathan is tech-savvy, but socially ignorant. And, as it turns out, he is utterly naïve when it comes to the emergent social abilities of Ava. He succeeds in creating a being that occupies a world that he cannot understand, and as a result, cannot anticipate.
Things might have turned out very differently if Nathan had worked with others, and if he’d surrounded himself with people who were adept at seeing the world as he could not. In this case, instead of succumbing to the lure of permissionless innovation, he might have accepted that sometimes, constraints and permissions are necessary. Of course, if he’d done this, Ex Machina wouldn’t have been the compelling movie it is. But as a story about the emergence of enlightened AI, Ex Machina is a salutary reminder that, sometimes, we need other people to help guide us along pathways toward responsible innovation.
There is a glitch in this argument, however. And that’s the reality that, without a gung-ho attitude toward innovation like Nathan’s, the pace of innovation—and the potential good that it brings—would be much, much slower. And while I’m sure some would welcome this, many would be saddened to see a slowing down of the process of turning today’s dreams into tomorrow’s realities.
Technologies of Hubris
This tension, between going so fast that you don’t have time to think and taking the time to consider the consequences of what you’re doing, is part of the paradox of technological innovation. Too much blind speed, and you risk losing your way. But too much caution, and you risk achieving nothing. By its very nature, innovation occurs at the edges of what we know, and on the borderline between success and failure. It’s no accident that one of the rallying cries of many entrepreneurs is “fail fast, fail forward.”8
Innovation is a calculated step in the dark; a willingness to take a chance because you can imagine a future where, if you succeed, great things can happen. It’s driven by imagination, vision, single-mindedness, self-belief, creativity, and a compelling desire to make something new and valuable. Innovation does not thrive in a culture of uninspired, risk-averse timidity, where every decision needs to go through a tortuous path of deliberation, debate, authorization, and doubt. Sometimes, seeking forgiveness rather than asking permission is the easiest way to push a technology forward.
This innovation imperative is epitomized in the character of Nathan in Ex Machina. He’s managed to carve out an empire where he needs no permission to flex his innovation muscles. And because of this—or so we are led to believe—he has pushed the capabilities of AGI and autonomous robots far beyond what anyone else has achieved. In the world of Nathan, he’s a hero. Through his drive, vision, and brilliance, he’s created something unique, something that will transform the world. He’s full of hubris, of course, but then, I suspect that Nathan would see this as an asset. It’s what makes him who he is, and enables him to do what he does. And drawing on his hubris, what he’s achieved is, by any standard, incredible.
Without a doubt, the technology in Ex Machina could, if developed responsibly, have had profound societal benefits. Ava is a remarkable piece of engineering. The way she combines advanced autonomous cognitive abilities with a versatile robotic body is truly astounding. This is a technology that could have laid the foundations for a new era in human-machine partnerships, and that could have improved quality of life for millions of people. Imagine, for instance, an AI workforce of millions designed to provide medical care in remote or deprived areas, or carry out search-and-rescue missions after natural disasters. Or imagine AI classroom assistants that allow every human teacher to have the support of two or three highly capable robotic support staff. Or expert AI-based care for the elderly and infirm that far surpasses the medical and emotional support an army of healthcare providers are able to give.
This vision of a future based around human-machine partnerships can be extended even further, to a world where an autonomous AI workforce, when combined with a basic income for all, allows people to follow their dreams, rather than being tied to unfulfilling jobs. Or a world where the rate of socially beneficial innovation is massively accelerated, as AIs collaborate with humans in new ways, revealing approaches to addressing social challenges that have evaded our collective human minds for centuries.
And this is just considering AGIs embedded in a cybernetic body. As soon as you start thinking about the possibilities of novel robotics, cloud-based AIs, and deeply integrated AI-machine systems that are inspired by Nathan’s work, the possibilities begin to grow exponentially, to the extent that it becomes tempting to argue that it would be unethical not to develop this technology.
This is part of the persuasive power of permissionless innovation. By removing constraints to achieving what we imagine the future could be like, it finds ways to overcome hurdles that seem insurmountable with more constrained approaches to technology development, and it radically pushes beyond the boundaries of what is considered possible.
This flavor of permissionless innovation—while not being AI-specific—is being seen to some extent in current developments around private space flight. Elon Musk’s SpaceX, Jeff Bezos’ Blue Origin, and a handful of other private companies are achieving what was unimaginable just a few years ago because they have the vision and resources to do this, and very few people telling them what they cannot do. And so, on September 29, 2017, Elon Musk announced his plans to send humans to Mars by 2024 using a radical reusable rocket design—something that would have been inconceivable a year or so ago.9
Private space exploration isn’t quite permissionless innovation; there are plenty of hoops to jump through if you want permission to shoot rockets into space. But the sheer audacity of the emerging technologies and aspirations in what has become known as “NewSpace” is being driven by very loosely constrained innovation. The companies and the mega-entrepreneurs spearheading it aren’t answerable to social norms and expectations. They don’t have to have their ideas vetted by committees. They have enough money and vision to throw convention to the wind. In short, they have the resources and freedom to translate their dreams into reality, with very little permission required.10
The parallels with Nathan in Ex Machina are clear. In both cases, we see entrepreneurs who are driven to turn their science-fiction-sounding dreams into science reality, and who have access to massive resources, as well as the smarts to work out how to combine these to create something truly astounding. It’s a combination that is world-changing, and one that we’ve seen at pivotal moments in the past where someone has had the audacity to buck the status quo and change the course of technological history.
Of course, all technology geniuses stand on the shoulders of giants. But it’s often individual entrepreneurs operating at the edge of permission who hold the keys to opening the floodgates of history-changing technologies. And I must admit that I find this exhilarating. When I first saw Elon Musk talking about his plans for interplanetary travel, my mind was blown. My first reaction was that this could be this generation’s Sputnik moment, because the ideas being presented were so audacious, and the underlying engineering was so feasible. This is how transformative technology happens: not in slow, cautious steps, but in visionary leaps.
But it also happens because of hubris—that excessive amount of self-confidence and pride in one’s abilities that allows someone to see beyond seemingly petty obstacles or ignore them altogether. And this is a problem, because, as exciting as technological jumps are, they often come with a massive risk of unintended consequences. And this is precisely what we see in Ex Machina. Nathan is brilliant. But his is a very one-dimensional brilliance. Because he is so confident in himself, he cannot see the broader implications of what he’s creating, and the ways in which things might go wrong. He can’t even see the deep flaws in his unshakable belief that he is the genius-master of a servant-creation.
For all the seductiveness of permissionless innovation, this is why there need to be checks and balances around who gets to do what in technological innovation, especially where the consequences are potentially widespread and, once out, the genie cannot be put back in the bottle.
In Ex Machina, it’s Nathan’s hubris that is ultimately his downfall. Yet many of his mistakes could have been avoided with a good dose of humility. If he’d not been such a fool, and he’d recognized his limitations, he might have been more willing to see where things might go wrong, or not go as he expected, and to seek additional help.
Several hundred years ago and earlier, it was easier to get away with mistakes with the technologies we invented. If something went wrong, it was often possible to turn the clock back and start again—to find a pristine new piece of land, or a new village or town, and chalk the failure up to experience.11 From the Industrial Revolution on, though, things began to change. The impacts of automation and powerful new manufacturing technologies on society and the environment led to hard-to-reverse changes. If things went wrong, it became increasingly difficult to wipe the slate clean and start afresh. Instead, we became increasingly good at learning how to stay one step ahead of unexpected consequences by finding new (if sometimes temporary) technological solutions with which to fix emerging problems.
Then we hit the nuclear and digital age, along with globalization and global warming, and everything changed again. We now live in an age where our actions are so closely connected to the wider world we live in that unexpected consequences of innovation can potentially propagate through society faster than we can possibly contain them. These consequences increasingly include widespread poverty, hunger, job losses, injustice, disease, and death. And this is where permissionless innovation and technological hubris become ever more dangerous. For sure, they push the boundaries of what is possible and, in many cases, lead to technologies that could make the world a better place. But they are also playing with fire in a world made of kindling, just waiting for the right spark.
This is why, in 2015, Musk, Hawking, Gates, and others were raising the alarm over the dangers of AI. They had the foresight to point out that there may be consequences to AI that will lead to serious and irreversible impacts and that, because of this, it may be expedient to think before we innovate. It was a rare display of humility in a technological world where hubris continues to rule. But it was a necessary one if we are to avoid creating technological monsters that eventually consume us.
1. Thierer’s framing of permissionless innovation was, in part, a reaction against largely US-based interpretations of the precautionary principle, which he saw as unnecessarily restrictive. This is a whole other rabbit hole that I won’t dive down here, but the history of the precautionary principle and US-Europe politics is one that’s fraught with misunderstanding, misinterpretation, and US-based accusations of using precaution as an excuse to raise trade barriers.
2. Context is everything here. Experimenting and making mistakes in a low-risk linear system where it’s relatively easy to turn the clock back and try again is one thing (experimenting with weird ingredient combinations in cooking, for instance). Breaking things that aren’t easily fixable — especially in complex systems where the results of experimentation are unpredictable and potentially catastrophic — is probably not such a good idea. I’d put breaking people, governance, society, and the planet in this category!
3. It’s not too much of a stretch to see what Elon Musk and the Department of Government Efficiency are currently doing within the federal government as a rather naive and uninformed application of permissionless innovation.
4. I have previously published the complete chapter on this Substack, but am posting this excerpt again as the context is a little different than it was the first time round.
5. Musk’s Falcon 9 wasn’t the first rocket to successfully return to Earth by landing vertically—that award goes to Jeff Bezos’ New Shepard rocket. But it was the first to combine reaching a serious altitude (124 miles) with a safe return landing.
6. For more on Musk and his Luddite award, see “If Elon Musk is a Luddite, count me in!,” published December 23, 2015, in The Conversation. https://theconversation.com/if-elon-musk-is-a-luddite-count-me-in-52630
7. Thierer’s blueprint can be downloaded here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2761139 (this is updated from the original link in the book).
8. Entrepreneur, educator, and author Steve Blank’s best-seller “The Four Steps to the Epiphany” (first self-published in 2005, with a 2013 edition from K&S Ranch) has been credited with starting the lean-startup movement which, among other things, embraces the idea of failing fast and failing forward.
9. See “Dear Elon Musk: Your dazzling Mars plan overlooks some big nontechnical hurdles,” published in The Conversation, October 1, 2017. https://theconversation.com/dear-elon-musk-your-dazzling-mars-plan-overlooks-some-big-nontechnical-hurdles-84948
10. As if to epitomize this, on February 6, 2018, Elon Musk launched his personal cherry-red Tesla roadster into heliocentric orbit on the first test flight of the SpaceX Falcon Heavy rocket—just because he could.
11. To be clear, while it was often easier to bury local problems caused by technology gone wrong in the past, the impacts on individuals and local communities were still devastating in many cases. It’s simply that they were more containable.