The Year that Generative AI Changed the World
12 months on from its release, the post-ChatGPT world is very different from what came before it. But we still have a lot to learn about building a positive AI future
It’s hard to believe that, just twelve short months ago, most people hadn’t heard of generative AI, and were blissfully unaware of the artificial intelligence tsunami that was about to engulf them.
OpenAI’s public launch of ChatGPT on November 30th 2022 altered all of that. And in the process, it changed the world.
Yet it wasn’t so much the technology as the social and technological landscape surrounding it that led to the quite profound AI transition we’ve experienced over the past year.
Generative pre-trained transformers, or GPTs, had been around for a while before OpenAI released ChatGPT. They garnered a lot of attention within the AI community, and I had plenty of students and colleagues who tinkered with and were impressed with their emerging capabilities.
Yet GPTs were very much a niche technology.
The release of ChatGPT did two things to change this. It made the technology exceptionally accessible to anyone with a web browser and an internet connection. And it demonstrated — with meme-like virality — just how sophisticated and human-like natural language interfaces were becoming.
Even with the initial release, I experienced many jaw-dropping moments (quite literally) as I demonstrated ChatGPT to students and colleagues. The impact of seeing what the technology was capable of was visceral.
And then, of course, the release of GPT-4 took things to another level.
The public success of ChatGPT paved the way for other models and companies to enter the fray at a speed and scale that was mind-boggling. Meta’s LLaMA, Google’s Bard, and Anthropic’s Claude are just three examples from an exploding large language model scene that further stirred an increasingly tumultuous landscape.
Add in the equally rapid growth and accessibility of AI-based image, video, and audio generators, and 2023 became the year in which generative AI unequivocally changed the world.
Just to emphasize this, over this past year:
A whole new way of interacting with technology has emerged, one that more closely mimics human-to-human interaction.
We’ve gone from a society where hardly anyone used generative AI to one where a significant number of people use it in some way.
Students the world over have discovered innovative ways of completing assignments faster and better than they could on their own — with a little help from AI.
Established norms and approaches to teaching in higher education have been shaken to the core.
Unprecedented opportunities for AI-assisted learning at scale have emerged.
Researchers have been introduced to transformative new ways that AI can augment and accelerate discovery.
People who think for a living have begun to wonder if AI is going to replace them.
Institutions with no previous interest in AI have established artificial intelligence steering committees, advisory groups, visions, strategies, and action plans.
AI experts have become overnight celebrities.
Politicians and world leaders have met (repeatedly) with AI entrepreneurs.
National policies and regulations around AI have gone from near nothing to full blown pre-legislative discussion and drafting.
International governance around AI has equally been kicked into action.
There’s been a dramatic shift from talking about AI ethics to addressing AI risks.
There have been new and rapidly growing concerns around AI-driven threats to democracy and social stability.
Chatter around the existential risks of AI has gone mainstream.
Responsible AI has become a thing.
Half the world seems to have become AI experts overnight (at least that’s how it sometimes feels).
There’s been an acceleration in developing AI foundation models that are generalizable to different uses.
There’s been growing discussion of the possibility, and even the possible near-term emergence, of artificial general intelligence.
And corporate politics at OpenAI have become more compelling than reality TV.
From the perspective of observing and living through a unique advanced technology transition, it’s been a wild ride! And yet it’s a ride that has as much to do with social, economic, and political dynamics and behaviors as it does with the technology itself.
This is important, as it underlines just how relevant the relationships between technology and society are in determining the trajectory of something like AI.
Of course, emerging AI capabilities are profoundly transformative in themselves. It would be foolish to underestimate the emergent properties of new foundation models, or how these in turn are stimulating new and potentially more disruptive advances.
Yet as this past year has shown, AI is part of a highly complex ecosystem of people, communities, organizations, and governments, together with the social, political, and economic threads that bind them together. And this recognition is critically important to the future of AI.
Perhaps the most obvious lesson from this past year is that our AI future cannot be left solely to AI experts.
Of course expertise in artificial intelligence is essential to the technology’s development and to understanding its capabilities. But in order to navigate a positive AI transition in a world made up of people and politics, expertise is also needed in these domains.
I’d go a step further here and state quite categorically that much broader scientific and technical expertise also needs to be an integral part of the AI technology transition.
As we develop technologies that are designed to emulate human-like intelligence and that potentially lead to machines that have their own version of intelligence — and possibly consciousness and self-awareness along with it — we need to be engaging deeply and widely with experts in intelligence (irrespective of whether it’s biological or non-biological), consciousness, neuroscience, philosophy, the nature of personhood, responsible innovation, ethics, and much more.
In other words, AI can no longer just be about coding, developing novel architectures, or constructing ever-more powerful compute capabilities. Or even experimenting with “suck it and see” technologies where we’re not quite sure what they’ll do until we try.
Of course, there’s nothing wrong with technology development that’s based on trial and error — it’s how innovation’s worked for most of human history. But in an increasingly complex and bounded system such as the one we find ourselves in on this planet, nonlinear and unpredictable associations between cause and effect mean that there’s an increasing likelihood of naive experimentation pushing us past hard-to-spot tipping points.
The introduction of ChatGPT was one such tipping point. The emergence of unexpected behaviors in large language models was another I suspect. Both led to changes that cannot be undone, nor forgotten.
As with all such tipping points, the consequences have been a mix of good and bad — although at this point I think many people would agree that, on balance, we’re collectively in a better place than we were a year ago, even though this couldn’t have been predicted with any degree of certainty.
But we’re still riding this initial wave of the AI technology transition. Where things go next is anyone’s guess.
I’d probably put my money on a cooling off of the hype and a consolidation of current capabilities over the next few months, with no AGI emerging in the near future.
But the thing with complex systems is that they have a nasty habit of throwing up the unexpected. And this is where there’s a fast-growing need for both agility in how we respond to emerging AI challenges and opportunities, and resilience in how we react to the unexpected — personally, corporately, locally, nationally, and globally.
This is where, as the past year has shown us, we need to be far more creative and innovative in how we plan for and navigate the AI technology transition — including how we draw on expertise that goes far beyond the science and technology of AI.
Looking to the coming twelve months, it’s going to be interesting to see how much we’ve learned from this past year — and a year where generative AI really did change the world.