White House goes all in on responsible innovation and artificial intelligence
A new executive order from the Biden administration sets out to ensure safe, secure, and trustworthy AI — but what does this mean for institutions aspiring to be at the cutting edge of AI innovation?
Whether you’re a developer, a researcher, an academic institution, or just someone who uses and dabbles in artificial intelligence, it’s hard to ignore just how big a deal AI safety is these days.
Underlining this, the UK government is hosting its first AI Safety Summit this week. Last week, British Prime Minister Rishi Sunak announced a new Artificial Intelligence Safety Institute. And in the last few days, the United Nations launched a high-level advisory body on artificial intelligence focused on harnessing AI “for humanity, while addressing its risks and uncertainties.”
And today, the US government placed an outsized stake in the ground with a new Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence.” (Update: The full executive order has just been posted here.)
This is most definitely not your run-of-the-mill angst over generative AI (in fact, the term doesn’t appear in the executive order’s fact sheet) or AI-enabled “cheating” in class. Rather, it’s a forward-looking approach to navigating some of the most transformative technological advances in decades — possibly centuries. And it’s one that anyone working at the intersection of transformative technologies, society, and the future should take seriously.
The Biden-Harris executive order sets out to “ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence”. And it comes with a very explicit message that, to realize the benefits of AI, we have to understand how to navigate the risks — which are deeply novel and highly challenging.
This will come as no surprise to anyone who’s been working at the edge of advanced technologies and their responsible and beneficial development for the past couple of decades. As my colleague Sean Dudley and I wrote recently in the journal Nature Nanotechnology, “the history of technology innovation is clear: transformative technologies always come with unintended and often hard-to-anticipate uses and consequences, and we often ignore or trivialize these at our peril”.
And yet the transformation that AI represents is, in many ways, substantively different from those associated with previous technologies. And as a result it’s encouraging that so many national and international bodies are taking its responsible development and use seriously.
Yet there are deep pitfalls to this newfound awareness of the need to think before we leap into an AI future — including the dangers of regulatory capture, the unintended consequences of narrow thinking around governance and regulation, and a deep lack of awareness around the complex dynamics surrounding integrated sociotechnical systems.
More than this, it’s becoming increasingly clear that the responsible development of AI is deeply intertwined with nearly every other area of human endeavor, from combatting climate change to developing resilient critical infrastructure, ensuring secure cyber systems, navigating economic growth, supporting equity and justice, delivering effective and responsive healthcare, underpinning sustainable development, opening up new pathways to discovery and innovation, and much more.
In other words, AI represents a foundational set of technologies that has profound implications for our future, and it demands an equally profound and informed degree of attention if it’s to serve rather than harm society.
This understanding is reflected in the new executive order, and it goes a long way toward explaining the emphasis on safety — as without this, everything else is placed in jeopardy.
Whether the emphasis is strong enough or informed enough is, as yet, unclear. Yet it is a start — and one that stakeholders the world over can hopefully build on.
In this building process, a number of things are worth highlighting as institutions, developers, researchers and others begin to grapple with what the executive order means for them:
Embracing responsible innovation
Unlike Europe and other parts of the world, the US has been hesitant to embrace the concepts behind responsible innovation — perhaps because there’s still very much an ethos here of go fast, break things, and don’t worry about the consequences. And yet the seriousness of the potential risks around AI is leading to a renewed emphasis on responsible innovation within the US government.
The hope is that experts in the field step up to the plate with new thinking and tools for applying responsible innovation to AI, and that AI leaders recognize and respect this expertise and insight. This is also an opportunity for leaders in AI development to refresh themselves on approaches to successful and responsible innovation.
Highlighting Public Interest Technology
The past few years have seen growing interest in technology innovation that serves society and, in particular, supports effective governance. This comes under the umbrella of Public Interest Technology, and it’s an area that is both implicitly and explicitly highlighted in the executive order.
This is most clearly seen in the section on “Ensuring Responsible and Effective Government Use of AI,” which emphasizes responsible government deployment of AI as well as modernizing federal AI infrastructure. But the underlying ideas associated with public interest technology are found throughout the document.
Hopefully strong and impactful connections will emerge between beneficial and responsible AI and Public Interest Technology as a result of the executive order.
Spotlighting AI and Critical Infrastructure
The executive order is explicit in its recognition of the potential threats AI poses to critical infrastructure. For instance, the order states that:
The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
(My emphasis). This sends a very strong message to institutions, researchers and others who have expertise in critical infrastructure that they need to be exploring the intersection between these systems and AI with some urgency.
It also opens up substantial opportunities for new research and thinking at this critical nexus.
Advancing equity, justice, and civil rights
The executive order is equally clear that equity and justice are critical components of the responsible development and use of AI. Again, quoting the order:
Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing.
The explicit mention of “irresponsible uses” is interesting here, as it acknowledges that the danger lies in how AI is used and not just in the technology itself. It’s also an important acknowledgement that AI is deeply intertwined with complex sociotechnical ecosystems, and that beneficial progress has to include an understanding of impacts at the individual and community level.
In other words, there is a need, and a mandate, for experts in social justice to be partners in developing responsible and beneficial AI, not just add-ons.
Confirming the need for new transdisciplinary thinking, research and scholarship on advanced technology transitions
One of my biggest takeaways from the executive order — apart from this being a clarion call for universities, research institutions, and others to get their act together around transdisciplinary approaches to beneficial and responsible AI ahead of the inevitable funding opportunities — is the glaring need for new thinking, research, scholarship, and practice around how we understand and navigate advanced technology transitions.
The executive order is an important step in the right direction of framing the challenges and opportunities presented by the meteoric rise of AI capabilities. But it also reflects a naivety that is, in part, a result of a gaping lack of understanding around how we collectively navigate such transformative technologies.
This naivety includes an over-emphasis on following the lead of large tech companies and their agendas; a lack of representation from a diversity of expertise, perspectives, and insights; little evidence of learning from past technology transitions; a lack of recognition of innovative approaches to governance; and a lack of clear support for innovation in how we think about and understand the transformative impact of AI on society.
However, this in turn presents a whole suite of unique opportunities for research organizations — universities in particular — to fill the gaps.
This includes standing up initiatives that dig deep into what responsible AI means, including studying how transdisciplinary approaches can move the needle in innovative ways on AI safety. It also encompasses exploring how engaging in deeply novel ways across disciplines (including with the arts and humanities) can open up new insights into how we create the futures we aspire to, and taking an intellectual leadership role in how we collectively think about and co-construct a vibrant AI future.
My guess is that many institutions are already strategizing around how they can become intellectual and influential heavyweights in the AI futures arena following the executive order. And global initiatives such as those coming out of the UK, the UN, and elsewhere will only catalyze these efforts as growing concerns translate into funding opportunities.
Hopefully, emerging initiatives will be as innovative, as audacious, as creative, and as forward-looking as we need them to be if we’re going to avoid the trap of conventional thinking and vested interests wrapped in the clothing of social responsibility, and truly ensure a vibrant and promise-filled AI future.
Update: This post was based on the initial executive order fact sheet. The full executive order is now available here.