
Rethinking Higher Education in the Age of AI
How embracing playful experimentation can redefine universities in an era of limitless intelligence
A couple of days ago the AI for U podcast posted a conversation between me and host Brian Piper on AI and higher education. It was a wide-ranging conversation, and one that touched on a number of ideas that I’ve been working on.
I’d intended to write about our conversation in my last post. But as anyone who’s read it will know, I got sidetracked by writing about AI and learning/education in terms of playpens and playgrounds.
Now that that’s out of my system (at least for the moment), I wanted to get back to the conversation I had with Brian — especially as we touched on a number of topics beyond the playpen/playground analogy that are relevant to broader thinking about how artificial intelligence is likely to profoundly impact how we approach education and learning within universities.
I would strongly recommend listening to the full conversation, which is available on Enrollify as well as Apple Podcasts, Spotify, and YouTube. But I’m also acutely aware that most people who read articles like this have an aversion to clicking on podcast links.
And so I thought I’d try something different — for me at least — and use ChatGPT to capture the essence of the conversation and write an article about it in my style and voice.1
As I often find with generative AI experiments like this, I’m not sure how comfortable I feel about asking a machine to write on my behalf, as so much of my professional and academic identity is embedded in my writing. And yet, as a way of extracting and synthesizing ideas from audio to text to make them more accessible, this is undeniably a powerful application of AI.
I’ve added more information on the process in the Notes below. But without further ado, here’s ChatGPT’s first-person synthesis of the conversation I had with Brian — with footnotes of course, because I couldn’t resist the temptation to editorialize just a bit!
Playing into the Future: Rethinking Higher Education in the Age of AI
Extracted and synthesized by ChatGPT from a conversation with Andrew Maynard on the AI for U podcast
In higher education today, we stand on the edge of profound transformation. I've been working with artificial intelligence closely, and every day reveals new potentials—and new threats—that weren't apparent just yesterday.2 The rapid evolution of AI means that if you haven't updated your knowledge in the past week, you're likely already behind.3
I'm increasingly concerned about the very core of what universities do—not only teaching but research. AI’s ability to handle intellectual tasks traditionally reserved for human scholars is astounding. Its capacity for generating insights is far surpassing our own abilities, forcing us to rethink our place in the intellectual landscape.4
It’s easy to dismiss these developments as mere hype, but the reality is we're already witnessing changes profound enough to threaten our professional identities. There is a real risk that, in the absence of proactive engagement, scholars could become "intellectual hobbyists," individuals indulging in thinking merely as an artisanal pursuit, while AI assumes responsibility for meaningful, impactful intellectual work.5
So how do we address this shift? The key lies not in opposition, but coexistence. AI offers us transformative power, enabling educators and researchers to multiply their impact. But unlocking this potential demands a radical shift in perspective. Institutions must start viewing AI not simply as tools, but as catalysts for creativity and reinvention.
At Arizona State University, we have embraced this opportunity with an open-minded attitude. When AI first entered the academic sphere, our sector largely approached it with apprehension. Early on, conversations were dominated by anxiety about academic integrity—"will students cheat?"—instead of exploring how AI could revolutionize education itself. However, we quickly pivoted, choosing instead to explore how AI could enhance both teaching and learning. We developed tools to assist faculty in creating syllabi, tests, and rubrics, simple yet impactful ways that AI began reshaping our academic environment.6
However, the world of academia moves slowly, often resisting rapid innovation. Institutional inertia—the comfort of routine and resistance to change—poses one of the greatest challenges. Faculty often express reluctance, feeling too overwhelmed, too entrenched in existing methods, or simply too old to embrace these technologies. Yet the danger of waiting to engage is immense. The pace of AI's growth means hesitation today could lead to irrelevance tomorrow. The challenge for leadership is not to dictate change from the top but to meet people where they are, creating pathways forward that feel both manageable and inspiring.
How do we encourage genuine engagement with AI? The answer is surprisingly playful. I frequently draw upon an analogy from MIT’s Media Lab, encouraging institutions to create AI playgrounds rather than playpens.7 A playpen is controlled and constrained—safe, but restrictive. A playground, in contrast, is an open, exploratory space where people freely experiment, learn, and innovate. Giving students, faculty, and staff room to experiment with AI without rigid boundaries fosters organic creativity, sparking bottom-up innovation. This is where the real breakthroughs emerge.
Yet there's another, more existential consideration for higher education institutions. As knowledge becomes universally accessible, the traditional role of universities as gatekeepers of information is increasingly obsolete. If anyone can access high-quality information instantly through their smartphone, what unique value do universities provide? Institutions must critically assess their relevance in this new environment, finding ways to meaningfully contribute beyond merely transferring knowledge.
Public universities, in particular, have a responsibility to lead societal adaptation to the emerging AI age. Our mission must evolve toward guiding society through this transition, exploring how communities can thrive with AI, not in spite of it. As thought leaders, we have a responsibility to help shape the future in positive, human-centered ways.
Looking ahead, the societal impacts of AI are unpredictable yet certain to be significant. In areas such as supply chains, medicine, and even personal decision-making, AI’s role will soon become central. Automated research labs capable of accelerating scientific discovery beyond human capacity alone might be commonplace within just a few years. These developments challenge our conventional structures, yet they also present unprecedented opportunities to redefine how we contribute intellectually and socially.
But to harness this potential, we must first overcome the fear of obsolescence.8 We need to be brave enough to relinquish old ways, fostering environments that reward exploration, imagination, and bold thinking. The future of higher education doesn't have to be a dystopia of irrelevance. It can be a thriving ecosystem of innovation, where AI doesn't replace our humanity but enhances our greatest strengths.
Ultimately, the choice is ours: adapt boldly and thrive, or cling to tradition and risk becoming irrelevant. The path ahead isn't defined by the technology itself, but by how creatively we decide to engage with it.9
Notes
In working with ChatGPT to create this synthesis I first used OpenAI’s Whisper AI V2 model to transcribe the podcast and identify/assign speakers.
The transcript file (minus the podcast intro and outro) was then cleaned up (which just involved clarifying a couple of speaker assignments) and passed over to ChatGPT.
Using the OpenAI o1 Pro model, ChatGPT was given the following prompt:
I've attached the transcript from a podcast. I would like you to write a 700 word article based on Andrew Maynard's comments, capturing the main big and novel ideas that he explores. Please ensure that this is coherent and in the voice of Andrew, but stand alone from the podcast
I'd like you to write this article as if it is Andrew Maynard writing it in the first person.
Finally, I read the resulting text to ensure that I could comfortably stand by everything that was said.
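For anyone curious about what this kind of pipeline looks like in code, here's a rough sketch using the open-source whisper package and the OpenAI Python SDK. It's an approximation rather than a record of what I actually did (I ran the transcript through o1 Pro inside ChatGPT rather than via the API), so the file name and model name below are placeholders, and speaker labeling would need a separate diarization step on top of Whisper itself:

```python
# A rough, hypothetical sketch of the transcribe-then-synthesize workflow described
# above. File names and the chat model name are placeholders; the actual synthesis
# was done with o1 Pro inside ChatGPT, not via the API.

import whisper            # open-source openai-whisper package
from openai import OpenAI

# 1. Transcribe the podcast audio locally with Whisper (large-v2).
#    Plain Whisper returns a single block of text; assigning speakers
#    would normally require a separate diarization step.
stt_model = whisper.load_model("large-v2")
transcript = stt_model.transcribe("podcast_episode.mp3")["text"]

# 2. Ask a chat model to synthesize a first-person article from the transcript.
client = OpenAI()
prompt = (
    "I've attached the transcript from a podcast. I would like you to write a "
    "700 word article based on Andrew Maynard's comments, capturing the main big "
    "and novel ideas that he explores. Please ensure that this is coherent and in "
    "the voice of Andrew, but stands alone from the podcast.\n"
    "I'd like you to write this article as if it is Andrew Maynard writing it in "
    "the first person.\n\n"
    f"Transcript:\n{transcript}"
)
response = client.chat.completions.create(
    model="o1",  # placeholder; use whichever reasoning model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

If you'd rather not run Whisper locally, the first step can be swapped for OpenAI's hosted transcription endpoint; the overall shape of the pipeline stays the same.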
As I mention in the main post, I’m not sure how comfortable I am with this process — but then I’m fiercely protective of my writing as it’s part of my professional identity. But read as an op-ed, it’s clear that ChatGPT did a very good job of extracting and synthesizing the ideas we discussed in the podcast into a short and insightful piece.
And at the end of the day, it is a very efficient way of making ideas and thinking accessible to an audience who reads, but who wouldn’t be seen dead clicking a link to a podcast 😁
This is me taking my own advice and quite literally playing and experimenting with AI to see where it takes me!
I explicitly asked ChatGPT to write in the first person as me, reflecting my style and tone in the interview.
This is close to a phrase I used in the interview and I’ve used elsewhere — I would like to think I use it with more panache!
I tend to wrap my thinking and writing in more caveats than ChatGPT does here — it’s interesting to read how I might come across if I used my own caveat filter!
If I were being nitpicky I’d critique ChatGPT for mixing up narratives around education, research, and academic identity — although I suspect I was the culprit here in the interview. And to be fair, they are deeply intertwined, and it’s reasonable to interpret the framing of higher education to encompass all three.
I love the way that ChatGPT has cleaned up my affirmative support for what ASU is doing here — it’s a veritable PR engine!
See my previous post at https://futureofbeinghuman.com/p/ai-playgrounds-in-higher-education — this was the rabbit hole I fell down as I was trying to write today’s post!
I’m sure I didn’t use this phrase, but I like it and will use it from now on!
I’m not sure the piece is completely in my voice. But this definitely has the feel of an op ed written by me and passed through the filters of multiple editors. And it conveys my thinking here impressively well.