$10 million for AI safety research
The recently established Frontier Model Forum has just announced a new $10 million AI Safety Fund — but will the projects it supports be as radical as the technological breakthroughs they address?
The Frontier Model Forum — an industry body established to promote the safe and responsible development of frontier AI systems — has just announced an initial pot of over $10 million to support research in the field of AI safety. The fund will back independent researchers from around the world who are affiliated with academic institutions, research organizations, and startups.
The announcement comes hot on the heels of a call for new approaches to managing the risks of AI co-authored by such luminaries as Yoshua Bengio, Geoffrey Hinton, Stuart Russell, and even Yuval Noah Harari and Daniel Kahneman.
This new funding is an important opportunity to catalyze innovative thinking, transdisciplinary research, and forward-looking stakeholder engagement around positive AI technology transitions. And hopefully there’ll be plenty of creative ideas that emerge once the call for proposals goes out.
At the same time, there’s a very real danger of the Frontier Model Forum attempting to address unconventional challenges through rather conventional thinking — especially if it doesn’t look beyond the blinkers of its own field.
Many of the current wave of AI capabilities are built around models that are trained on massive amounts of data and are remarkably versatile in how they can be used — the GPT models underlying ChatGPT, together with other large language models, are good examples. Because they are so adaptable, these models are increasingly being referred to as “foundation models.”
However, there have been growing concerns that some foundation models could become so powerful that they begin to exhibit potentially dangerous capabilities and behaviors. A recent paper defined these as “frontier models,” and the term has since been gaining traction.
The Frontier Model Forum was explicitly established to advance AI safety in the light of such models. Founded by OpenAI, Google, Microsoft, and Anthropic, the forum sets out to pull together expertise from multiple sectors to support the safe and responsible development of powerful AI. Included in this is a commitment to collaborate with policymakers, academics, and civil society.
This is good news. As my colleague Sean Dudley and I recently wrote in the journal Nature Nanotechnology, “With mounting concern around the disruptive nature of [large language models] and the possibility of developing artificial general intelligence, many conversations are still being driven by technological experts with little understanding of the complexity of the social, economic and political landscape that societally beneficial and responsible AI faces.”
The Frontier Model Forum — and presumably the funding coming out of it — will hopefully make substantive strides toward closing this understanding gap.
Yet I do worry that, if care isn’t taken, projects funded by the Forum will end up bound by conventional thinking — projects that prioritize technical expertise, that build on outmoded models of risk management, or that fail to embrace creative thinking unfettered by disciplinary norms.
Of course there’s a place for small steps that push at the edges of current understanding. Yet the bold and transformative developments around foundation and frontier models demand equally bold and transformative thinking about safety and responsibility.
This is where there’s a growing need for new thinking and research around advanced technology transitions — thinking that transcends conventional ideas of progress bound by linear extrapolation, disciplinary norms, or established areas of expertise.
For instance, it would be encouraging to see the Forum support initiatives that explore novel conceptualizations of risk; that draw inspiration from the nexus of the arts, humanities, and emerging technological capabilities; that radically reimagine public participation and engagement in responsible innovation; that experiment with blended models of thought leadership, new understanding, and knowledge mobilization; and that recognize the transformative possibilities of ideas that fly in the face of conventional thinking.
I hope we see a portfolio emerge that captures ideas like these and many more. But of course, for this to have even a glimmer of a chance of happening, it will need to start with the submitted proposals — and with proposers having the imagination to put forward ideas as radically transformative as the technologies they hope to steer toward a better future.
Here’s hoping this will be the case, and that this opportunity won’t be yet another attempt to be radically transformative in very conventional ways.