AI in Higher Education: Students need playgrounds, not playpens
AI capabilities are moving so fast, and the implications are so profound, that we restrict the ability of students to learn through curiosity, experimentation, and hands-on experience at our peril.
This is not the Substack I set out to write this morning. I was intending to post about a conversation I recently had about the future of higher education with Brian Piper on the AI for U podcast. But as I started writing, the article morphed into a piece about playgrounds and playpens — and AI.
Playgrounds and playpens, it turns out, are a powerful metaphor for thinking about AI and education. And they’re a metaphor I’ve been thinking and talking about for some time now.
Inspired by Mitch Resnick at MIT and the work of Marina Umaschi Bers which, in turn, inspired him, I wrote about the metaphor last year in the broader context of undergraduate education. Since then I’ve become increasingly interested in how it opens up new ways of thinking about AI and learning, at a moment when the technology is challenging nearly every aspect of not only how we teach and what we teach, but why we teach.
I’ll come back to the broader conversation with Brian in a follow-up post. But for now I’ll follow the story and dive deeper into the metaphor of playgrounds and playpens in education, and how it’s potentially useful in thinking about artificial intelligence.
As a starting point, it’s worth taking a moment to think about why AI presents such a unique challenge and opportunity to learning and education — especially in universities.
Since ChatGPT hit the public scene in 2022, generative AI has impacted nearly every part of higher education. In some cases this has led to new AI tools being embraced by educators and administrators. In others there’s been active resistance to AI in any form being used in teaching or by students. And of course there are the continuing fears that AI makes it easier for students to cheat, or to become lazy learners, or simply not to retain understanding that comes from using AI as a learning aid.
But whether you’re an AI optimist, an AI pessimist, or simply in denial, it’s nearly impossible to ignore the reality that AI is having a substantial and growing impact on learning and education.
At the same time, we’re seeing a large gap in understanding between where the leading edge of AI capabilities is, and where educators think it is. As a result, there’s still a tendency to think of AI as a tool that can complete assignments or write essays, or create personalized learning environments, or simply act as a form of Google on steroids.
Yet the reality is much more complex — and much more transformative. Because advanced AI models are becoming increasingly capable of simulating aspects of ourselves that define us at a fundamental level — such as the ability to think, to reason, and to solve problems with agency — they stand apart from pretty much any previous technology or tool that we’ve created.1 And because of this, they cannot be approached as just another technology to teach students about, or another tool to enhance traditional approaches to education.
Rather, we’re seeing a growing need for completely new ways of thinking about the intersection between learning, education, and AI — especially where educators are sometimes (perhaps often) further behind the curve than the students they’re trying to educate.
And this is where the perspective shift inherent in moving from a playpen to a playground mentality becomes useful — and important.
It’s something of an oversimplification, but curiosity and problem-solving are close to the heart of how we learn.2 These innately human attributes are often used to great effect by educators as they help students acquire specific skills and understanding.
More often than not though, learning environments that are grounded in curiosity and problem solving are constrained. They draw on and leverage these traits, but they are often explicitly designed to ensure students follow a carefully curated path to achieving specific goals and outcomes.
They are, in effect, metaphorical playpens. Some creative play and problem solving is allowed. But just as a playpen wraps a young child in constraining walls and presents them with a few select toys to channel their attention, these metaphorical learning playpens are designed to channel and restrict learning along narrow lines.
It’s a mode of teaching that works well where the purpose and goals are clear, the journey is well-trod, and the desired outcomes are easily assessed. But it quickly falls apart where the goals and purpose are unclear, the journey is breaking new ground, and no-one’s quite sure what the desired outcomes are — never mind how they should be assessed.
This is very much like the situation we find ourselves in with AI. This is a technology where the speed of development is far outstripping the far more sedate pace of pedagogical reform; where informal exploration by students means they already know more than their instructors in many cases; and where we’re still struggling to grasp the full implications of machines that emulate and exceed some of the most fundamental aspects of what it means to be human.3
This is where playpen-style learning environments — which still assume that expertise and authority lie with the instructor — run out of steam very fast. And it’s also where the metaphor of the playground becomes interesting.
In contrast to a playpen, a playground is relatively unbounded. There are rules of course — don’t be stupid for instance, be kind, don’t spoil things for others. But the idea of the playground is to empower users to learn from experience — and from each other; to flex their imagination; to be creative; to try new things; to make mistakes; to ask questions; and to creatively solve problems.
Playgrounds are environments that open up rather than constrain options, and that allow curiosity and creativity to be transformed into invention and innovation. Yet they are not totally without bounds. Rather, playgrounds are carefully designed and curated to stimulate curiosity, imagination, and play. And through play, learning.
And this is where the analogy comes back to AI and higher education. Rather than trying to control how students explore, use, and think about AI, I suspect we should be giving them more opportunities to explore and play — to make mistakes, discover new possibilities, to invent, and to innovate. And we should perhaps be thinking of educators — in this context at least — as guides, mentors, and fellow-travelers, rather than the fount of all knowledge.
This is, of course, a risky strategy — especially as it means relinquishing some control over the learning environment. But given how rapidly AI is advancing, the greater danger I suspect is in holding students back because of misplaced ideas about how and what they should learn.
This becomes all the more important where any playpen-like constraints placed around AI in education reflect misconceptions around how AI is beginning to transform our lives. As I noted above, AI is no longer simply about generating text in response to a prompt — and I’m not sure it ever was. Rather, it’s a collection of technologies that are deeply and fundamentally changing what we do, where we live, and even who we are — in ways that no-one fully understands yet.
As a result, we are all on a journey together as we explore and learn how to develop and use rapidly advancing AI capabilities to enhance and improve lives in ways that lie far beyond conventional thinking and understanding. And this includes educators as well as students. And one of the ways we can approach this is to adopt a playground mindset rather than a playpen attitude to AI and learning — and provide students with the space and opportunities to flex their curiosity and hone their problem-solving skills without unnecessary constraints or restraints.
What this might look like is, of course, part of the journey. I suspect there are as many types of AI learning playgrounds as there are people imagining and creating them. But I suspect that we need to get serious about at least the following four areas:
Fostering a culture of AI-play for all that encourages experimentation and risk taking while avoiding unacceptable harm (the norms and rules of the playground);
Ensuring students have free and easy access to a range of cutting edge AI technologies (access to the playground);
Ensuring students have permission to play with these technologies with very few expectations or constraints (not being a playground killjoy); and
Ensuring that there are mechanisms in place to reinforce learning that comes from playing with advanced AI tools and capabilities (designing playgrounds with intention).
I’m interested in whether anyone is already creating spaces like this for students to learn through play with AI in higher education. I hope they are, as otherwise it’s hard to imagine how we’ll make serious progress in equipping our students to thrive in an AI future.
And please do look out for the follow-on post where I’ll be focusing more broadly on my conversation with Brian while also “playing” with AI in my own Substack playground!
1. Mark Daley has a great article out on how AI is categorically different from any preceding technology. As he writes, “Steam engines didn’t challenge our claim to unique intelligence. Telegraph wires didn’t ask us to rethink the nature of human thought. Electricity didn’t insinuate that it might rival or surpass our creative spark. The changes AI brings are not limited to external upgrades in productivity or convenience; they reach inside, prodding us to question the crux of human identity.” AI isn’t running water: It's something different. March 13, 2025
2. They are also core to what makes us “us.”
3. This is beyond the scope of this particular article, but I believe there is a categorical error in treating a technology that fundamentally challenges our thinking about who we are, and the very nature of what it means to be human, as a mere learning aid.