Responsible AI: Lessons from Nanotechnology
20 years ago we were learning how to navigate the risks and benefits of nanotechnology. Two decades on, are we applying those hard-won lessons effectively to artificial intelligence?
At first blush, nanotechnology and artificial intelligence may not seem to have that much in common. And yet there are surprising similarities when it comes to avoiding failures in a society where the success of transformative technologies depends on far more than technical know-how alone.
Despite this, it’s not at all clear that we’re learning from the past as we rush headlong into an AI future.
My colleague Sean Dudley and I explore this further in a new commentary in the journal Nature Nanotechnology and an accompanying article in The Conversation — and conclude that there’s a lot to be learned from the transdisciplinary initiatives and broad stakeholder engagement that underpinned nanotechnology.
In the articles we draw on the early days of nanotech development — something I was at the heart of as I co-chaired the interagency Nanotechnology Environmental and Health Implications working group, and later served as science advisor to the highly influential Project on Emerging Nanotechnologies. And we make the case for greater investment in understanding and navigating advanced technology transitions in ways that “bridge disciplines and sectors, and bring together people, communities, and organizations with diverse expertise and perspectives to investigate emerging landscapes and drive toward a more equitable, sustainable, and promise-filled future.”
Plenty of mistakes have been made in the development and use of nanotech over the past two decades — but we’ve also learned a lot about the importance of working with experts from the arts, humanities, and social sciences, in addition to those at the forefront of nano-specific science and technology. We’ve also learned that broad stakeholder and public engagement are absolutely critical to success.
As artificial intelligence development gathers pace, though, these lessons don’t seem to be getting through. Development is still being driven by a small group of experts and companies who believe that they have all the understanding they need. And while there’s a growing urgency around how to ensure the safe and responsible development of AI, there’s still a reluctance to engage a diversity of voices and perspectives in these conversations.
This is a serious mistake, and one that needs to be corrected as soon as possible. We’ve learned a lot from previous advanced technology transitions like nanotechnology. I’d include the development of technologies like genetically modified organisms here, which was a masterclass in how naivety, hubris, greed, and a lack of broad engagement can create near-insurmountable roadblocks to progress. In fact, early investment in responsible nanotech drew heavily on lessons learned from the GMO debacle.
As AI development continues to accelerate, we cannot afford to get things wrong — if anything, the stakes here are far higher than they were with either nanotechnology or GMOs. To get things right, AI development needs to learn from the lessons of the past — and fast.
Read more in:
Navigating Advanced Technology Transitions Using Lessons from Nanotechnology. Andrew D. Maynard and Sean M. Dudley. Nature Nanotechnology, October 2, 2023.
Navigating the risks and benefits of AI: Lessons from nanotechnology on ensuring emerging technologies are safe as well as successful. Andrew D. Maynard and Sean M. Dudley. The Conversation, October 2, 2023.
https://andrewmaynard.substack.com/p/responsible-ai-lessons-from-nanotechnology
The lesson we need to learn is that we're riding a knowledge explosion which generates powers of vast scale faster than we can learn how to safely manage those powers. It's this pattern which is the problem. Nuclear weapons, genetic engineering, artificial intelligence are here today, and more such vast powers we can't even yet imagine are coming as the 21st century continues to unfold. As the number and scale of such vast powers increases, the room for error shrinks, and the odds are increasingly against us.
The most constructive thing we could do is to stop talking about particular threatening technologies one by one by one, and shift our attention to the knowledge explosion itself, the process generating all the powers of vast scale. Focusing on particular threatening technologies is a loser's game, for the simple reason that they are being developed faster than we can learn how to manage them. It's only in learning how to take control of the process generating such powers that there is hope.
A primary obstacle in achieving such a shift of focus to the source of the problem seems to be that a "more is better" relationship with knowledge is a holy dogma of the science community, and it is the science community which holds most of the cultural authority in the modern world.
The science community sincerely wishes to make such emerging powers safe, but they don't seem able or willing to conceive of any approach which challenges the "more is better" philosophy at the heart of their enterprise. And so they keep creating powers of vast scale as fast as they can, and when things go wrong they then blame the problem on the public or politicians or anybody except their own stubborn addiction to an outdated 19th century knowledge philosophy.
Are human beings gods, creatures of unlimited ability? If we answer no, then it should become obvious that any process which generates new powers in an unlimited "more is better" and "faster is better" manner is on a collision course with the reality of the human condition. Whatever the limits of human ability might be, we are racing towards those limits as fast as we can. And because of the vast scale of powers being developed, when we do reach the limits of our ability, catastrophe may explode upon us much faster than we can effectively respond.
I'll begin to take the science community seriously when I hear them talking about the possibility of pausing the development of all powers of vast scale. Yes, actually doing this would be very challenging. But talking about doing it is not at all difficult, and that's the place to start.
Science is supposed to be built upon respect for evidence. And here's the evidence we should be listening to today.
Nuclear weapons - we have no idea how to make them safe
Genetic engineering - we have no idea how to make this safe
Artificial intelligence - we have no idea how to make this safe
See the pattern?
If the science community is truly loyal to reason, they will acknowledge this pattern and respond to it. And that will require doing less science while we learn how to bring this dangerous pattern to an end. Doing less science for a time doesn't mean we stop learning. To the contrary, it means learning something new, necessary, and huge: how to take control of the knowledge explosion.
If you're hearing frustration in my words, that's because I've lost faith in the science community's ability to even consider what's been described above. That virus of distrust will continue to spread throughout our society until the science community comes to terms with the new world its success has created.