US White House Embraces Responsible Innovation as Society Faces an AI Tsunami
Today's fact sheet from the Biden-Harris administration is the clearest indication yet that the US is beginning to understand the societal and economic benefits of responsible innovation
Today’s fact sheet from the White House announcing “New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety” provided a much-needed step toward supporting societally beneficial and responsible AI. Whether it’s enough hangs in the balance—I suspect it barely scratches the surface of what will be needed to navigate the advanced technology transition we’re entering. But I was encouraged to see the emphasis on responsible innovation.
The announcement confirms the White House’s commitment to getting the benefits of AI right while ensuring the safety and security of the people and communities it impacts. It also included details of three new initiatives: $140 million in funding for the National Science Foundation to support seven new AI research institutes that “catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good”; a public assessment of existing generative AI systems; and policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities.
It’s the first of these and the emphasis on “responsible innovation” that particularly caught my attention.
This isn’t the first time the Biden-Harris administration has promoted responsible innovation. The March 2022 Executive Order on Ensuring Responsible Development of Digital Assets uses similar language. But this is perhaps the strongest endorsement yet of responsible innovation in the development of a transformative new technology.
For those of us who have been involved for years now in developing, working on, and promoting the adoption of the ideas behind responsible innovation, this is a very significant step forward—although it remains to be seen what flavor of responsible innovation the White House pursues.
To many of us working in the field, the framework developed by Jack Stilgoe, Richard Owen, and Phil Macnaghten remains a solid starting point. In their 2013 paper, “Developing a framework for responsible innovation,” they defined responsible innovation along four dimensions: anticipation, reflexivity, inclusion, and responsiveness.
This framing is useful, but as I explored with my co-author Elizabeth Garbee some years ago, it is fiendishly hard to operationalize.
Hopefully the White House’s commitment to responsible innovation around AI leads to approaches that are both grounded in theory and highly applicable to the practical development of AI systems that vastly benefit society without causing more problems than they solve.
If the National Science Foundation — and AI developers more broadly — embrace the challenges presented by socially responsible innovation, this could be a game changer. Certainly, it’s hard to see how we can collectively weather the technology transition we’re entering if there isn’t new and transdisciplinary thinking around responsible innovation.
Interestingly, there is support for relevant research in the new CHIPS and Science Act. As my colleague Dave Guston writes in Issues in Science and Technology, the Act includes “provisions for societal considerations that, when implemented by agencies, could result in significant transformations to science and innovation policy that may benefit the country for generations,” including a specific provision for the National Science Foundation: it “mandates that NSF engage with the ‘ethical and societal considerations’ of the research it funds.”
The good news is that many AI companies are already investing heavily in responsible AI, and this latest initiative from the White House will only boost those efforts. But I still worry that it’s too little, too late: a knee-jerk response that should have been (but wasn’t) preceded by years of investment in research, frameworks, interagency initiatives, and public-private partnerships around ensuring the societally beneficial emergence of transformative AI technologies.
Hopefully this will be a wake-up call that galvanizes the scale of investment in responsible innovation needed to ride not only this wave of AI, but also the much larger waves just over the horizon.
And not just for responsible AI — these early steps toward rethinking how we innovate responsibly are critical if we’re to prepare for other technology transitions that are coming down the pike — including quantum technologies and cognitive technologies.
Because if there’s one thing AI is teaching us, it’s that if we wait until we have a problem with transformative technologies, we’ve probably waited too long.
You write...
"And not just for responsible AI — these early steps toward rethinking how we innovate responsibly are critical if we’re to prepare for other technology transitions that are coming down the pike — including quantum technologies and cognitive technologies."
So far I see no evidence that any of these well-intentioned efforts are willing to question whether the other technology transitions you reference should come down the pike. It almost always seems to be taken as an obvious given that the knowledge explosion will continue to accelerate, and that there's nothing we can do about it other than try to adapt to the coming changes. It's interesting how eagerly we embrace the role of helpless victims.
Knowledge development feeds back on itself, resulting in an accelerating pace of knowledge development. If one understands what the word "acceleration" means, it should become obvious that at some point we won't be able to keep up.
The "more is better" relationship with knowledge is a simplistic, outdated and increasingly dangerous 19th century philosophy. Science is racing forward, while our relationship with science clings blindly to the past. Once that is understood, our relationship with "experts" undergoes a transformation.