Supercharging Research Using AI
A new report from the US President's Council of Advisors on Science and Technology indicates we are on the cusp of an AI-driven transformation in how we do science
Could artificial intelligence revolutionize how we do science? A new report from the US President’s Council of Advisors on Science and Technology (PCAST) argues that it could and it should — as long as new capabilities are developed and used responsibly.
The report — Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges — is essential reading for anyone looking to leverage AI in pushing the limits of scientific research and discovery.
I’ve been having conversations with colleagues for well over a year now about how emerging AI capabilities could transform research — many of them kickstarted by the rapid progress of generative AI.
Of course, the idea that AI could help in making new discoveries isn’t a new one. For the past few years there’s been growing interest in how artificial intelligence can potentially help reveal knowledge that’s beyond the reach of human intelligence alone. As far back as January 2016, DeepMind co-founder Demis Hassabis was on the record as saying deep learning — a key technology behind current advances in AI — could “accelerate scientific research” and “even suggest a way forward that might point the human expert to a breakthrough.”
Hassabis was talking about DeepMind’s AI AlphaGo, which went on to beat leading Go professional Lee Sedol a couple of months later with a move that has been described as one that “no human would make.”
That was eight years ago. Fast forward to the present and AI is increasingly being seen as a technology that can lead to leaps in understanding in science — but at a scale that is orders of magnitude greater than the “novel insight” that was achieved by AlphaGo.
The new PCAST report reflects this, and makes a compelling case for AI having the potential to revolutionize every aspect of science as it accelerates the rate of knowledge creation and opens up the possibility of new discoveries that human scientists are incapable of making on their own.
Not surprisingly, the report acknowledges the importance of generative AI. But it goes much further than this as it considers the possibilities that are nascent in emerging AI foundation models. These, it claims, could lead to profound breakthroughs in areas that include materials discovery; advanced semiconductors; understanding and addressing climate change and extreme weather events; revealing the underlying physics of the universe; fundamental understanding of biology; disease diagnosis and treatment; and the behaviors of humans, organizations, and institutions.
The report’s authors argue that the possibilities here represent a profound step-change in how we do science and how we leverage new discoveries and knowledge for good. They also make it very clear that there is a national imperative for the US to be at the forefront of AI-enabled research and discovery.
But with this comes a strong charge to ensure that AI in science is developed and used responsibly.
As the report articulates in Recommendation 4 (“Adopt principles of responsible, transparent, and trustworthy AI use throughout all stages of the scientific research process”):
“Managing the risks of inaccurate, biased, harmful, or non-replicable findings from scientific uses of AI should be planned from the initial stages of a research project rather than performed as an afterthought. PCAST recommends that federal funding agencies consider updating their responsible conduct of research guidelines to require plans for responsible AI use from researchers. These plans should include recommended best practices from agency offices and committees that address potential AI-related risks and describe the supervision procedures for use of any automated process. To minimize additional administrative burden on researchers and build a culture of responsibility, after enumerating major risks, agencies should provide model processes for risk mitigation.”
Overall, the report lists five recommendations that, in addition to responsible research, cover access to AI resources and federal datasets, public-private research collaborations and partnerships, and innovative approaches to integrating AI into the process of research and discovery.
These form a concrete set of recommendations for supporting and growing the transformative use of AI in research. But the deeper importance of the report lies in the sheer audacity of PCAST’s vision of an AI “supercharged” future of scientific discovery.
It’s this that makes the report essential reading for anyone involved in the process of pushing the bounds of knowledge and understanding — whether in the senior echelons of universities, research organizations, and government agencies; or at the metaphorical coal face of scientific discovery. Infused through the report is the message that, done right, AI could be a game changer in how we make new discoveries, what we discover, and how we use these discoveries.
Of course, this is wrapped up in caveats around the very real limitations of AI capabilities as they currently stand. But even with current and emerging AI technologies — limited as they are — there are growing indications that artificial intelligence could fundamentally change the way that we extend our knowledge of the world we live in, and how we use this to transform the future.
The full report can be accessed here.