There was a video clip from Meet The Press doing the rounds this weekend with former Google CEO Eric Schmidt claiming that AI is just too complex for anyone outside the industry — and especially in the government — to “get it right” when it comes to the first wave of effective governance and regulation.
https://twitter.com/MeetThePress/status/1657778656867909633
I get where Schmidt is coming from (you can read the transcript of the clip below) — AI is complex, and it’s hard for policy makers and others to wrap their heads around what’s coming down the pike and how to steer it in a societally beneficial direction. And of course he probably still has traumatic after-images of Congress discussing social media or debating TikTok seared into his mental retina.
And yet what he misses — and he is far from alone in this — is that the emergence of beneficial and responsible AI also depends on having people at the table who have expertise in areas that many industry experts are most definitely not au fait with — including sophisticated approaches to the governance of emerging technologies.
This “leave it to the technical experts” mentality is something I’ve bumped into time and time again in my work around emerging technologies — and it never plays out well.
It certainly didn’t with genetically modified organisms at the tail end of the twentieth century, when Monsanto’s “it’s complicated, leave it to us” approach backfired spectacularly. It’s an attitude that risked endangering the early development of nanotechnology. And even as recently as a few months ago I heard the same sentiments expressed about quantum technologies on a panel at the American Association for the Advancement of Science annual meeting. There, the argument was that the technology is too important to be held back by discussions with “non-experts” about what might go wrong!
The nanotechnology case is an interesting one, as we did dodge a bullet there. And we did this by engaging early and often across a very broad range of domains and expertise. But it was touch and go at some points.
I remember briefing the President’s Council of Advisors on Science and Technology some years back when one of the Council members exclaimed in frustration “you can’t expect members of the public to make decisions on technologies that they don’t understand and we do!”
I’m paraphrasing, but this was the gist of their comments. And it reflected their sheer exasperation at the thought that anyone other than leading scientists could possibly know enough about nanotechnology to have an informed opinion on potential adverse impacts or unintended consequences.
Sound familiar?
The difference between nanotech and AI is that enough people recognized early on the need for broad and deep engagement to ensure the development and governance of safe and beneficial nanotechnologies. In effect, it became a team effort.
Part of the drive behind this was the realization that, without building trust across key stakeholders (including policy makers, civil society, and members of the public), and without addressing potential social, economic, environmental, and human health risks early on, the social and economic benefits of nanotechnology could well slip from our grasp.
There was also a recognition that, when developing a technology that has the power to impact everyone in profound ways, everyone has the right to play some role in its development and use — and that “it’s complicated” simply isn’t an excuse for excluding people from the table.
It was the right call for nanotechnology, and it’s the right call for AI.
“It’s complicated” is not an excuse for avoiding engaging with people who have a stake in AI not messing up their lives, and who have expertise that may just extend the range of development and governance options beyond the limited understanding of self-proclaimed experts in the field.
In fairness to Eric Schmidt, I suspect that he has a more nuanced perspective than the one that comes across in the clip above. And even here he’s trying to balance the sheer speed of developments in AI with the paralyzing inertia that sometimes accompanies working with people from different backgrounds, expertise, and levels of understanding (you can see this in the transcript of the clip I’ve included below).
I get this.
Even so, who gets to decide what beneficial and responsible AI looks like over the next few months will determine what comes next — and if this is decided by industry experts who do not have expertise in navigating exceptionally complex technology transitions in a deeply complex society, we have a problem.
Of course, engagement is challenging. It will take patience and humility, and concerted efforts to ensure people from very different backgrounds have access to information that is understandable, relevant, and informative. It will also mean recognizing that people don’t need to understand the inner workings of AI to discuss and explore how it might potentially impact their lives and threaten what’s important to them.
But hard as it is, this level of engagement is essential if we’re to get AI right, rather than failing to learn from the lessons of past technology transitions.
Transcript of clip
For completeness, here’s the transcript of the video clip:
Interviewer: You’ve described the need for guardrails. And, what I’ve heard from you is, we should not put restrictive regulations from the outside, certainly from policy makers who don’t understand it. I have to say, I don’t hear a lot of guardrails around the industry that. It really just … as I’m understanding it from you … comes down to what the industry decides for itself.
Schmidt: When this technology becomes more broadly available, which it will, and very quickly, the problem’s going to get much worse. I would much rather have the current companies define reasonable boundaries.
Interviewer: It shouldn’t be a regulatory framework … it maybe shouldn’t even be a sort of a democratic vote. It should be … the expertise within the industry help to sort that out.
Schmidt: The industry will first do that, because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no-one in the government who can get it right. But the industry can roughly get it right. And then the government can put a regulatory structure around it.