Universities need to be investing in responsible AI now more than ever
As universities scramble to take advantage of ChatGPT and other LLMs, they cannot afford to take their eye off the ball of responsible and ethical AI
It’s been incredible to see the speed with which ChatGPT, together with an emerging array of Large Language Models (LLMs) and associated tools, has shaken up universities.
Just a few weeks ago, it seems, the conversation around ChatGPT was about how to prevent plagiarism in class. Now it’s rapidly shifting to how to leverage advances in AI across academia in order to keep up with the competition.
As a result, this is turning out to be one almighty whiplash of a technology transition in academic institutions—and for good reason. Emerging AI capabilities have the potential to transform almost everything we do as academics.
LLMs are now being seen as potential tools for accelerating discovery, stimulating creativity, catalyzing collaboration, building research teams, elevating and accelerating grant writing, vastly increasing administrative efficiency, and transforming learning. And this is just the tip of the transformational AI iceberg.
The academic catch-up love affair with AI is being fueled in part by a growing fear that early-adopter institutions will have a substantial first mover advantage.
The motivation here is that, where the technology can vastly improve the quality of grant proposals, for instance, or accelerate the speed with which innovative research translates into high-impact papers, or even lead to deeply persuasive fundraising drives, universities that are not at the cutting edge of using it will find themselves falling behind.
And with the current pace of innovation, even being a month or so behind the pack could have profound consequences for longer-term funding, branding, impact, and rankings.
As an example, successful grant proposals are increasingly going to be those that have been developed using LLMs.
Used smartly, LLMs can ensure proposals precisely match the requirements of funding calls while making them compellingly clear and persuasive, all while slashing the time between good ideas and expertly crafted submissions.
But the potential use of LLMs extends far beyond this. Well-designed LLMs have the potential to help researchers quickly sift through innovative ideas to hone their focus on what’s most likely to resonate with funders and reviewers. And in-house LLMs have the ability to uniquely capture institutional capacity and memory, and to distill these down in ways that few individuals or institutional mechanisms can currently achieve.
As a result, using a well-crafted LLM-based system to convene teams that combine expertise in compelling and original ways, and to catalyze collaboration in research and fundraising, is likely to become increasingly common.
Similarly, imagine all members of an academic institution having access to an LLM that has been pre-trained on a vast database of organizational documents, notes, publications, articles, CVs, and more. When combined with a natural language interface, such an LLM would become a transformative catalyst for collaboration, synergistic connections, and creative research and scholarship—and one that blows existing human-based systems out of the water.
It’s the sort of technological tool that would give any institution that implemented it ahead of others a massive competitive advantage.
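To give a flavor of what such a system could look like in practice, here is a minimal sketch of one plausible approach: retrieval-augmented generation, in which relevant institutional documents are retrieved at question time and handed to a general-purpose model, rather than the full pre-training imagined above. The toy document snippets, model names, and OpenAI client calls below are illustrative assumptions only, not a blueprint.

```python
# Illustrative sketch: a retrieval-augmented "institutional memory" assistant.
# Assumes the OpenAI Python SDK and numpy are installed, and that OPENAI_API_KEY
# is set in the environment. The documents below stand in for a real corpus of
# institutional notes, CVs, publications, and reports.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Placeholder corpus; a real deployment would index many thousands of documents.
documents = [
    "Prof. A's lab studies battery chemistry and has federal funding through 2026.",
    "The Center for Data Science offers shared GPU infrastructure to all faculty.",
    "Prof. B recently published on machine learning for materials discovery.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def answer(question, top_k=2):
    """Retrieve the most relevant documents, then ask the model to answer from them."""
    query_vector = embed([question])[0]
    # Cosine similarity between the question and every document.
    scores = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided institutional context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

print(answer("Who on campus works at the intersection of materials and machine learning?"))
```

Even a toy like this makes the governance questions concrete: who is allowed to query such a corpus, what it reveals about individual colleagues, and who is accountable for its answers matter at least as much as the retrieval mechanics.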
Capabilities like these are simply too seductive and important for universities not to embrace at speed. But as they do, there’s a danger that critical issues around ethical and responsible development are sidelined in the rush for academic AI supremacy.
The problem here is not one of intentionally sidestepping broader societal challenges, but one of not taking them seriously enough in the race to be at the forefront of the AI revolution.
Outside of academic institutions, developers of LLMs, chatbots, and other AI-driven innovations are being faced with an increasingly complex landscape around the risks, benefits, and governance of their creations. And these concerns are just as relevant to development and implementation within universities as they are to private and non-profit organizations.
It’s concerns around what we don’t know about how AI will transform the future that led to the recent letter calling for a pause on developing systems more powerful than GPT-4, and that are forcing a lot of organizations to scramble internally as they balance the need to stay at the head of the AI pack with the need to be socially responsible.
These are well-founded concerns. They are not typically based on science fiction speculation or cynical fear mongering, as some have suggested. Rather, they reflect the speed with which a potentially transformative technology is emerging faster than our ability to understand and govern the possible social and economic consequences.
They are also concerns that are buoyed along by the ease and rapidity with which people are embracing these new technologies—often without a thought as to the future implications, or with a naively dismissive attitude toward them and an assumption that all will be well in the end because it always is (although it’s also worth remembering who gets to write the history of technological successes …).
This, though, is how transformative technologies creep under our collective societal skin and lead to adverse social and economic impacts—not through Hollywood-like doomsday scenarios, but through an eagerness to adopt and adapt them before everyone else does, coupled with an assumption that any potential issues will be handled by someone else.
Within this mix, academic institutions are poised to become major players. As well as having the capacity to develop cutting-edge AI systems, they are also substantial influencers and catalysts of innovation adoption and diffusion.
This is an important role for universities, and one that shouldn’t be downplayed. But there’s also a responsibility that comes with this power to adopt, adapt, and be an agent of AI change in the world. And this responsibility includes asking deep questions about the social and ethical implications of the systems we build, the ways in which they are used, and the impacts they potentially have on communities beyond our own.
As a result, as academic institutions rush to leverage AI, they should also be at the forefront of public discourse around responsible AI. And this should not be decoupled from internal implementation, but should be deeply intertwined with it.
There’s an old adage that universities are very good at telling others what to do, but not so good at taking their own advice—especially when it comes to societally responsible innovation.
We don’t have this luxury with AI — not that we ever did, but the stakes are even higher now. Which means that, as universities scramble to use cutting edge AI in research, administration, learning, societal impact, and a whole host of other areas, they need to be simultaneously investing in effective AI governance and its socially responsible and ethical development and use. And they need to be working proactively with others faced with the same challenges.
Not to do this makes them just one more player in the tsunami of AI innovation that assumes responsible innovation is important, but that it is somebody else’s problem.
You write...
"....(AI concerns) are not typically based on science fiction speculation or cynical fear mongering, as some have suggested. Rather, they reflect the speed with which a potentially transformative technology is emerging faster than our ability to understand and govern the possible social and economic consequences."
Yes, good point. Now let's keep going with that.
AI will further accelerate the knowledge explosion, just as computers and the Internet have. Knowledge development feeds back upon itself, leading to an accelerating rate of knowledge development. Thus, if we're already unable to "understand and govern the possible social and economic consequences", that is only going to get worse over time, at a faster and faster pace.
We don't need academia to jump on the AI bandwagon like everybody else. We need intellectuals to be asking big picture questions like this...
What are the compelling benefits of AI which justify taking on even more risk at a time when we already face so many? Is the marriage between violent men and an accelerating knowledge explosion sustainable?
More to the point, we need academics focused on the knowledge explosion itself, the source of the accelerating pace of disruption. Some disruption is good; more and more, faster and faster, without limit is not. So, how do we take control of the pace of the knowledge explosion so that it proceeds at a pace which society at large can constructively adapt to?
At the heart of such challenges is a simplistic, outdated, and increasingly dangerous "more is better" relationship with knowledge left over from the 19th century. If academics won't examine and challenge this outdated knowledge philosophy, who will?
https://www.tannytalk.com/p/our-relationship-with-knowledge