As AI goes to Washington, what's being missed?
Lawmakers are scrambling to make sense of the pace at which AI is developing — but are they asking the right questions and talking to the right people?
Lawmakers on Capitol Hill are, it seems, waking up to the disruptive potential of artificial intelligence. But whether growing concerns around AI will be channeled into smart policies is, as yet, far from clear — especially given Congress’ track record around issues like social media.
This was highlighted recently in an article by Cat Zakrzewski in the Washington Post. Zakrzewski focuses on growing concerns amongst policy makers that the private sector may not be able to contain the potential risks of AI — including the disruptive impacts of platforms like ChatGPT. However, laudable as these concerns are, I wonder if lawmakers are being sufficiently creative in how they approach a challenge that is anything but conventional.
In particular, three questions nagged at me while reading the piece:
Who’s at the table and who’s driving the agenda?
According to Zakrzewski, last week a bipartisan delegation of lawmakers traveled to Silicon Valley to meet with top tech executives and venture capitalists.
Appropriately for Silicon Valley, they convened over lunch at Stanford University with experts from Google, Palantir, and presumably other companies, where they were briefed on the risks and benefits of the current wave of AI.
What is unclear is who else these lawmakers are talking with, and whether this includes experts in the governance of emerging technologies who aren’t part of the Silicon Valley echo chamber.
While California may feel important as a hub of AI innovation, the current surge in capabilities is grounded in national and global innovation. And the expertise needed to navigate the advanced technology transition that AI represents is, likewise, global.
This obsession with Silicon Valley was counterbalanced somewhat by a recent meeting between President Biden and the President’s Council of Advisors on Science and Technology (PCAST) to address AI risks and policy. PCAST has a pretty diverse roll call of members, and I’d hope that they bring a broader perspective to emerging challenges than a group of Silicon Valley insiders.
However, I do worry that there is still an “insights vacuum” around the beneficial and responsible development of AI that is rapidly being filled by tech companies, interest groups, and loud (but not necessarily informed) voices—all of whom are pushing agendas that are not necessarily aligned with long-term societal good.
How can measured thinkers compete with loud voices?
Part of the challenge here is that business leaders, activists, advocacy organizations, and influential public figures can and do move fast to drive agendas and influence decisions. And as they do, they risk drowning out important ideas and perspectives from more measured thinkers and experts.
As Zakrzewski points out, “Consumer advocates and tech industry titans are converging on D.C., hoping to sway lawmakers in what will probably be the defining tech policy debate for months or even years to come.” The Washington Post article even begins by referencing an influential video posted by Tristan Harris (of The Social Dilemma fame) and how it spurred Senator Chris Murphy (D-Conn.) to action.
These “loud voices” are an important part of the policy landscape around grappling with emerging issues. But they also skew conversations—often to the detriment of diverse perspectives and expertise being heard. And where a technology is progressing as fast and with as much potential for disruption as AI, this risks introducing deep vulnerabilities into the process of identifying productive pathways forward.
In contrast, some of the deepest and most measured thinkers and experts around the development of beneficial and responsible AI do not have similarly “loud voices.” Unlike many of the more prominent figures dominating conversations on social media and in policy circles, they don’t have the capacity, support, time, or luxury to drop everything and work the D.C. circuit.
These are the academics who study advanced technology transitions, technology policy, social equity and justice, principled and responsible innovation, governance, and a whole host of other relevant topics, and yet have limited visibility outside their fields. They are the developers within companies who are asking questions about the broader implications of their work, and yet are buried within layers of institutional structure. They are experts in fields that no one has yet realized have something important to bring to the table when it comes to beneficial AI. They are representatives of communities who stand to be disproportionately impacted by emerging technologies and policy decisions, and yet struggle to be heard. And they are ordinary people who sometimes have extraordinary insights, but no way to engage with people who might benefit from them.
Unless these “quiet voices” and the diversity of perspectives, ideas, and expertise they represent, are part of exploring the pathway forward to beneficial AI, decisions will be dominated by what the loudest voices say we collectively need.
And I must confess that I’m not sure I trust people who have an agenda, who can mobilize fast, and who have the ear of decision makers, to get things right.
How are novel approaches to governance being considered?
The final thing that struck me in reading the Washington Post article was the absence of thinking around alternatives to government regulation.
Of course, a robust, agile, and responsive regulatory framework for advanced technologies is important. Yet government regulations are a blunt tool for addressing complex and fast-moving innovation, and experts have long recognized that they only work effectively within a broader landscape of governance options.
These often come under headings like “soft law” and “agile governance”: approaches to steering emerging technologies in directions that increase the benefits while reducing the potential risks, in flexible and responsive ways.
For an introduction to soft law as it applies to AI, it’s worth reading Marchant, Tournas, and Gutierrez’s paper on governing emerging technologies like AI through soft law. This lays out the pros and cons of such approaches, as well as the need to consider them seriously as AI begins to take off.
Agile governance provides a similar framing for thinking more fluidly about governance and regulation. Here, a good starting point is the World Economic Forum strategic intelligence map on agile governance.
For a more practical set of agile governance tools, Agile Regulation for the Fourth Industrial Revolution: A Toolkit for Regulators (also published by WEF) provides a useful place to start. The toolkit explores governance mechanisms such as anticipatory regulation, outcomes-focused regulation, co-regulation, and even experimental regulation.
Both soft law and agile governance are responses to the limitations that government regulations have in the face of transformative and fast-moving new technologies. They recognize that such regulations alone cannot address the risks of emerging technologies like AI, and that complex capabilities need integrated and sophisticated responses.
I worry, though, that in a rush toward government policy and regulation driven along by loud voices and vested interests, critical tools and ideas like these are being lost. Certainly, the reporting in the Washington Post article doesn’t indicate any sophistication of thought beyond using old tricks to rein in new technologies.
Hopefully, though, this is just a temporary glitch, and we’ll see lawmakers begin to explore how broader approaches to governance can help navigate the technology transition between where we are now and a future where AI transforms society for the better.
Yes, AI experts are certainly knowledgeable about how AI works, but they can hardly be objective about questions such as whether the AI industry should exist, or how fast or how deep it should proceed, etc.
https://www.tannytalk.com/p/the-future-of-ai
You write, "And I must confess that I’m not sure I trust people who have an agenda, who can mobilize fast, and who have the ear of decision makers, to get things right."
Indeed, their goal is to get things right for themselves.
You write, "Of course, a robust, agile, and responsive regulatory framework for advanced technologies, is important. "
I just watched the excellent Netflix documentary about the Bernie Madoff scam. A key takeaway was how ***spectacularly*** incompetent SEC regulation was. They were handed the case on a silver platter multiple times over a period of years, and never moved on it. It was only the 2008 financial crisis that finally brought Madoff down. Point being, talk of regulation and governance schemes etc. may be mostly a method of pacifying the public while the AI industry is pushed past the point of no return.
What I don't hear from anybody anywhere is an answer to this question:
What are the compelling benefits of AI which justify taking on yet another risk at a time when we already face so many?
Here's what we'll hear instead:
1) Don't worry about AI because it's not really a threat yet.
2) Once it is a threat, we'll hear this: Don't worry about AI because it's too late to do anything about it.
If we want to better understand what's going to happen with AI we shouldn't listen to industry experts, but to the history of nuclear weapons, which documents in sobering detail our ability to manage technologies of awesome scale.
I like your article, but might advise more caution regarding being "reasonable", and "realistic". None of that has worked with nuclear weapons.