Governing AI for Humanity: A Compelling Roadmap from the United Nations
The UN High-Level Advisory Body on AI has just released its first interim report, providing a comprehensive vision for ensuring AI benefits humanity.
In October of this year, the United Nations convened a High-Level Advisory Body on AI. Just over two months later the Advisory Body, which is made up of a highly diverse group of experts from around the world, published its first interim report on Governing AI for Humanity.
This is, perhaps, one of the most important AI governance analyses to have been published to date. It’s also one that I suspect will have far-reaching consequences for any institution that’s serious about supporting global, inclusive, equitable and sustainable development in an age of AI.
It’s worth noting here that two months to produce a report of this weight and significance is impressive.
This could have been an inconsequential patchwork of poorly informed ideas and cobbled-together recommendations that sacrificed quality for speed as it raced to keep up with the pace of AI development. And yet it’s not.
It’s perhaps a testament to the wide-ranging expertise and abilities of the Advisory Body’s members that the interim report delivers a clear and informed perspective on both the potential benefits and the potential risks of AI in the near to mid term. This is a report that neither gets tangled up in speculative long-term AI risks nor misses the transformative significance of AI breakthroughs. And buoyed along by a membership that understands agile and emerging governance at scale, it delivers.
For anyone working in AI, any institution that’s working at the intersection of emerging technologies and enabling positive futures (including governments, universities, think tanks, and civil society organizations), and any business that’s developing the next generation of AI tools, products, and capabilities, this interim report is a must-read. Not only does it signal where global governance around AI is heading — important in itself for anyone whose work is associated with or impacted by governance frameworks — but it begins to flesh out principles that are likely to be increasingly important in guiding AI for good.
Building on the Governance of Emerging Technologies
Of course, the interim report doesn’t arise out of a vacuum. There’s been a groundswell in thinking and action around AI governance over this past year that’s underpinned a number of initiatives, including the current version of the EU AI Act and the US Executive Order on Safe, Secure, and Trustworthy AI. Yet the UN’s interim report on Governing AI for Humanity reflects thinking around the global governance of emerging technologies that stretches back far further than this.
Back in 2008 I was a participant in the World Economic Forum’s first Summit of the Global Agenda Councils in Dubai. Within the Council I was working on, we began developing the idea of a Global Institute on Emerging Technology Policy — recognizing that emerging technologies were raising governance challenges that had global reach and needed a global response.
Central to that idea was the recognition that “[n]ew and innovative policies are needed at the international, national, corporate and institutional level to foster and manage the effective and sustainable development of emerging technologies.” (This from my 2008 notes on the concept).
Working with my colleague Tim Harper and other Council members, this eventually led to a proposal for a Global Centre for Emerging Technology Intelligence that I presented at Davos in 2010.
That proposal predated the World Economic Forum’s focus on the Fourth Industrial Revolution and the subsequent establishment of the WEF Global Future Council on Agile Governance (which I was also a member of). There’s no concrete evidence of causal links between these and the 2010 proposal, apart from my continued input. However, the ideas here, and subsequent work on agile governance and emerging technologies across many different organizations, set the scene for the UN Interim Report.
Back in 2010 we wrote in our proposal for the Global Center for Emerging Technology Intelligence:
“[T]here is growing awareness that a new paradigm is needed if the technologies are to be developed effectively—one that predicts and avoids potential hurdles, develops and implements new technologies in partnership with multiple stakeholders, identifies and addresses possible health and environmental impacts before they occur, and responds rapidly to new developments. Yet there is a gaping chasm between the knowledge that a different approach to policy-making is needed, and an understanding of what this new approach should look like.”
Eight years later, our work on the WEF Global Future Council on Agile Governance led to recommendations on reimagining policy making in the Fourth Industrial Revolution, starting with the statement that:
“The complex, transformative and distributed nature of the Fourth Industrial Revolution demands a new type of governance to address the interlinked dynamics of the pace and synergistic nature of emerging technologies; the transnational impact of technologies and broader societal implications; and the political nature of technologies.”
In this report we defined agile governance as “adaptive, human-centred, inclusive and sustainable policy-making” — a definition that aptly describes the thrust and recommendations of the UN Interim Report.
Both of these reports were part of a rich lineage of thinking around global governance of emerging technologies that stretches back well over a decade. Since our work on agile governance (and there are many other players in this field beyond the WEF Global Future Councils — although many of them have also worked with the Councils), there’s been growing attention paid to the governance of emerging technologies — not only through the latest iterations of the WEF Global Future Councils (including the Global Future Council on the Future of Technology Policy), but through many other initiatives that recognize a growing disconnect between the capabilities of powerful platform technologies like AI and effective governance mechanisms. And not surprisingly, a good number of people on the UN High-Level Advisory Body on AI have been integral to these initiatives.
As a result, much of the interim report reflects emerging thinking around the governance of emerging technologies. Yet it does this in a way that is highly responsive to the specific challenges and opportunities presented by AI.
Top Takeaways from the Interim Report
Here, it’s worth highlighting a number of takeaways I personally had from reading the interim report — with the proviso that these just scratch the surface of its scope and relevance:
Centering inclusive, equitable and sustainable human futures
While the interim report is focused on governance, the driving principles behind it are very clearly centered on positive futures for humanity. Here, there’s a strong emphasis on using AI to support inclusive, equitable and sustainable development, while putting governance guardrails in place to avoid harm.
Appropriately, the interim report is grounded in the UN Charter and the Universal Declaration of Human Rights — both of which place the rights and flourishing of individuals and global society at the core of a set of guiding principles.
In this way, the interim report is as relevant to innovation and global development as it is to technology governance.
Ensuring global access and benefits — especially in the Global South
Building on this emphasis on human rights, the interim report calls out growing concerns over an “AI divide” between communities that have differential access to AI tools and capabilities — especially where this exacerbates inequality between the Global North and Global South.
Here, the report’s authors note that an estimated 2.6 billion people — over 30% of the world’s population — lack access to the internet and, as a consequence, access to the potential benefits of AI.
Addressing potential disparities here will require global efforts — and global governance — to ensure that purely market-driven AI development doesn’t, by default, favor privileged communities.
As the report’s authors write:
“Ensuring that AI is deployed for the common good, and that its benefits are distributed equitably, will require governmental and intergovernmental action with innovative ways to incentivize participation from private sector, academia and civil society.”
Using AI to address climate change
While the interim report is extremely broad in its scope, it calls out climate change as a specific area demanding urgent action, and one where AI may provide novel solutions to complex challenges.
Climate change is used as a case study for how AI might help address global challenges that involve a deeply complex intertwining of social, technological and environmental factors. However, it’s clear that the report’s authors see the juncture between AI and climate as absolutely crucial to making progress in navigating the impacts of human-driven climate change.
AI and the Sustainable Development Goals
Similarly, the report emphasizes the potential power of emerging AI capabilities to accelerate progress toward the SDGs. This is something that I wrote about back in September, and it’s good to see this intersection coming to the fore here.
Precisely how AI might support progress toward the SDGs is still not well understood. However, it’s becoming clear that any organization that is focused on the SDGs or areas that touch on them — whether sustainability, global development, planetary health, or many others — yet is not also focusing on AI, is likely to become increasingly irrelevant in the emerging AI future.
This is particularly apparent in the report’s recommendations on institutional functions. These are a set of recommendations for activities, approaches and policies that institutions should consider implementing to support effective global governance of AI. Here, the SDGs are explicitly addressed in two of the seven functions in the report, and implicit in many others.
In particular, it’s worth highlighting that Institutional Function 5 calls for “AI-enabled public goods for the SDGs”.
Public Interest Technology and AI for social good
Over the past few years there’s been growing interest in the idea of “Public Interest Technology” — the study and application of technology expertise to advance the public interest in a way that generates public benefits and promotes the public good. The Public Interest Technology University Network, and even ASU’s Master’s degree in Public Interest Technology, are a direct result of this.
As AI has continued to gain momentum, there’s been increasing attention paid to how it might be incorporated into approaches to Public Interest Technology — and the UN report reflects this.
In particular, the report notes that:
“AI may drive progress in areas where market forces alone have traditionally failed. These range from extreme weather forecasting and monitoring biodiversity, to expanding educational opportunities or access to quality healthcare, and optimizing energy systems. Governments and the public sector can improve services for citizens and strengthen delivery for vulnerable communities by leveraging AI for social good.”
This emphasis on the use of AI for social good rather than simply for financial gain is crucial to building a better future, despite the prevalence of assumptions amongst some AI developers that the market around transformative and disruptive AI will automatically deliver on social good.
While the report doesn’t discount the value of market-driven innovation, it recognizes that markets alone do not and cannot address many of the complex challenges facing humanity. Here, there is a critical role for emerging AI capabilities to be used by governments and other organizations in the public interest — and in developing governance principles and processes that support this.
Multi-stakeholder collaboration
Those of you who are familiar with my writing will know that I’m an ardent advocate for multi-stakeholder engagement around advanced technology transitions. And so I was very pleased to see this emphasized in the Advisory Body’s interim report.
This emphasis on including a diverse array of stakeholders in global AI governance is infused through the interim report. It’s worth specifically calling out the recommendations on Institutional Function 4, which explicitly call for the facilitation of “development, deployment, and use of AI for economic and societal benefit through international multi-stakeholder cooperation.”
This, of course, meshes with the previously mentioned emphasis on Public Interest Technology — not least in the reality that you cannot develop AI for social good or in the public interest without stakeholders who represent these interests being at the table.
Agile AI governance
I’ve already touched on the landscape around agile governance within which the UN report was crafted, but it’s worth highlighting the interim report’s recognition of the need for more agile approaches to global governance.
As the report’s authors write:
“We interpret [our] mandate not merely as considering how to govern AI today, but also how to prepare our governance institutions for an environment in which the pace of change is only going to increase. AI governance must therefore reflect qualities of the technology itself and its rapidly evolving uses — agile, networked, flexible — as well as being empowering and inclusive, for the benefit of all humanity.”
Guiding Principles for governing AI for humanity
Finally, it’s worth highlighting the five guiding principles for governing AI for humanity in the report’s preliminary recommendations.
These are just a start — and may change as the work of the Advisory Body continues — but they are a solid beginning to approaching effective AI governance:
AI should be governed inclusively, by and for the benefit of all
AI must be governed in the public interest
AI governance should be built in step with data governance and the promotion of data commons
AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration
AI governance should be anchored in the UN Charter, International Human Rights Law, and other agreed international commitments such as the Sustainable Development Goals
A Final Word
The interim report from the UN High-Level Advisory Body on AI isn’t the last word on global AI governance, and it was never meant to be. Nevertheless, it sets the pace for global dialogue around how to ensure increasingly transformative AI capabilities benefit humanity as a whole, and is an important step toward getting AI right.
Even more importantly, it shines a spotlight on the growing reality that AI for good cannot simply be left in the hands of tech entrepreneurs and AI experts, but is something that everyone with a stake in the future of humanity needs to be a part of.