Some thoughts on yesterday's historic Senate Judiciary Committee hearing on oversight of AI
Yesterday, three witnesses — two from the private sector, one from the public sector — testified before a packed hearing of the US Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law. The topic: Oversight of A.I.: Rules for Artificial Intelligence.
The hearing — which is expected to be one of a number — was prompted by the rapid emergence of transformative AI capabilities, most visibly in the development and use of large language models and interfaces like ChatGPT, together with growing concerns around the gap between AI capabilities and the technology’s safe and responsible development and use.
It’s going to take some time to parse through all the nuances of yesterday’s discussions, but even so I suspect it will be seen as a milestone in AI development and regulation — not least because, as was noted repeatedly, industry reps were sitting before policy makers asking to be regulated!
The hearing was also a timely reminder that not everything in the US government is broken. The questions were (on the whole) sharp, insightful, and informed; the dialogue was bipartisan and collegial; and the discussion constructive.
It took me back with some nostalgia to past tech-related hearings I’ve been involved in!
I suspect that there will be a lot of unpacking in the coming days and weeks of what was said, what wasn’t said, and who was and wasn’t at the table. But I did want to capture my initial thoughts here while they are fresh in my mind.
Who was at the table
All three witnesses — Sam Altman (CEO of OpenAI), Christina Montgomery (Chief Privacy & Trust Officer at IBM), and Gary Marcus (Professor Emeritus at New York University) — brought considerable expertise to the table, and fielded questions well. They were also smart enough to admit where a question went beyond their domain of expertise. Yet if I’m honest, they brought a relatively narrow perspective to the responsible and beneficial development and use of AI.
What was missing was a broader perspective and expertise on governance of AI, agile and informed policy making, regulation of emerging technologies, technology assessment, the deeply complex dynamics around technology innovation, public perception, and societal impacts, and responsible AI more widely.
There was also a notable lack of representation from civil society, and groups who are working across sectors to understand how to navigate the AI technology transition in ways that benefit as many people as possible — including underrepresented communities.
Hopefully these stakeholders and experts will be brought into later hearings. And just as importantly, I hope they will be proactive in working with staffers to ensure their voices and perspectives are included and heard. Because without rapid and widespread engagement, there’s an increasing likelihood that oversight pathways forward will be defined and locked in by a very limited set of ideas coming from a very small group of players.
And this will almost certainly lead to decisions that, even with the best of intentions, decrease the chances of beneficial AI emerging while increasing the chances of harm.
Calls for AI-specific regulation
All three witnesses advocated for AI-specific regulation, claiming that existing regulatory structures are not up to the task of ensuring the safety of what’s coming.
This in itself was interesting — three people who openly and strongly advocate for the potential benefits of AI expressing deep concerns over the consequences of innovation without appropriate guardrails, and asking for help from the US government to put these in place!
This is perhaps a testament to just how powerful emerging AI technologies are — it’s possibly this generation’s “atomic technologies moment” where even the most ardent advocates of the technology worry about the devastation that could be caused if things go wrong.
Or maybe it’s just a cynical move to ensure that AI regulations favor first movers and large corporations — it wouldn’t be the first time that key industry players have helped define an exclusionary playing field through regulations that they are the best-equipped to comply with.
In this case I don’t think this is happening — at least, not yet. Sam Altman comes across as genuinely sincere in his request for help in ensuring that the benefits of AI far outweigh the risks. And I see the same level of sincerity in others as they ask what can be done to ensure researchers and developers don’t inadvertently make a mess of things.
I’m just not sure that invoking AI-specific regulation is the right way to go at this point — if only because hard law regulations are crude, cumbersome, need a well-defined subject (and I think AI is still a moving target), and have a habit of creating more problems than they solve if not done well.
IBM’s Christina Montgomery was more nuanced here, calling for “precision regulation.” This has been part of IBM’s regulatory strategy for AI for some time. Back in 2020, IBM’s then-CEO Ginni Rometty was advocating for “precision regulation” that focused on specific uses of AI while balancing benefits and risks, rather than the technology itself.
This focus on what a technology does, rather than what it is, is a well-established approach to regulating emerging technologies. That said, whether it will work for something as transformative as AI is far from clear. I suspect that emerging capabilities may well erode convenient distinctions between the technology and its uses. But it does demonstrate that calls for regulation are not enough, and that they have to be accompanied by deep discussion of what type of regulation, what gets regulated and what does not, how you tell whether something should be regulated, and how you act based on this.
In other words, regulation is complex, and really needs to be guided by people who know about regulation and science and technology policy. It also has a habit of taking a long time to formulate, and even longer to work out the kinks in its execution.
An AI regulatory agency
The idea of a new AI regulatory agency (or similar) came up a number of times in the discussion. This is something that’s been suggested in the past with previous technology transitions. When we were working on policies around nanotechnology for instance, the question of a nanotech regulatory agency came up more than once.
The reason why the US doesn’t typically have technology-specific regulatory agencies is that defining what’s in and out of scope is fiendishly difficult; they are almost always too narrowly focused and thus end up missing critical issues; they encourage a proliferation of agencies around other technologies; they emphasize the technology rather than the potential impacts; and they muddy the waters around existing regulatory jurisdictions.
I suspect that the same arguments will play out here and we won’t see an AI regulatory agency emerge any time soon in the US.
However, this discussion did underscore the need for something at the Federal level that supports agile, informed, and impactful responses to emerging advanced technological capabilities in ways that transcend current structures, and that promotes beneficial development while placing guardrails around potentially harmful innovation. Without this, it’s hard to imagine how new technologies will continue to be developed and guided in societally and economically responsible and beneficial ways.
Here, AI is simply too narrow and contested to be the sole focus of a new initiative. Despite its transformative power, it’s only one of a growing number of emerging technologies (and combinations of technologies) that will transform the world over the coming decades.
Rather, the US government needs to take seriously the idea of a cross-agency initiative that enables pathways to successfully navigating advanced technology transitions writ large, and that can respond with agility to emerging trends such as large language models as the need arises.
Here, a successful initiative would have the power to work with other agencies to develop foresight and benefit/risk assessment capabilities, to foster public private partnerships that inform effective decision making, to encourage innovative approaches to multi-sector and multi-stakeholder governance, to enable meaningful engagement with stakeholders across society, and to work across the federal government on technology-related policies, regulation, and policy implementation, including taking responsibility for new regulations as and when the need arises.
Focusing on advanced technologies rather than AI alone would build much-needed agility and responsiveness into the federal government. Here, an agency with a broad remit around ensuring beneficial, safe, and secure technology transitions could be created with the ability to address multiple emerging issues — including AI and LLMs, but also extending to areas like quantum technologies, brain-machine interfaces, nanotechnology, the next generation of DNA-based technologies, and beyond.
Public private partnerships
Public private partnerships were a “dog that didn’t bark” issue in the hearing, in that discussion of them was largely notable by its absence.
Maybe future hearings will address the need to develop partnerships between government, industry, academia, civil society, and beyond, as approaches to AI oversight are explored and developed. But the reality is that there need to be partnerships, collaborations, and cross-sector/cross-disciplinary cooperation as soon as possible if we’re going to get AI right.
This is where leveraging a wealth of expertise and experience around navigating complex technology transitions in the public sector (including universities) is critically important. This is expertise that’s been honed in many cases by working through previous transformative technologies, and draws on people who have a lot to bring to the table around AI if they are only given the chance.
More broadly, in a democratic society where decisions both impact large numbers of people and are impacted by them, progress is near-impossible without the trust, insights, and cooperation that come from trans-sector engagement and partnerships. This is not something that can be introduced once industry has laid the groundwork — that’s a sure-fire way to lose trust. Rather, public private partnerships and widespread engagement around getting AI right need to happen now if there’s even a hope of developing the frameworks, processes, and buy-in that will be necessary to see the benefits of AI without hitting a lot of brick walls along the way.
The bottom line here is that I was encouraged by Tuesday’s hearing. But there’s a long way to go before we seriously begin to get our heads around how to get AI right. And one of the first steps as we navigate this particular advanced technology transition is to make sure that we’re not shooting ourselves in the foot because we aren’t engaging widely and leveraging expertise and perspectives that might otherwise be overlooked.
And while some experts may argue AI is far too important to waste time listening to others, the one big lesson from previous advanced technologies is that if we don’t listen and engage early and often, we’ll come to regret it.
(A quick addendum — the one theme I didn’t touch on here but was present in the hearing is the potential threat to democracy and the democratic process from AI. This is going to become a substantial issue as the next US election approaches, and so expect more on this in the coming months.)