If you're obsessed with ChatGPT's accuracy, you're missing the point
Having just finished teaching one of the country's first undergraduate courses on using ChatGPT, I'm more excited than ever by how the tool stimulates creative and critical thinking
I’ve spent the past six weeks reading over 2,000 conversations between ChatGPT and a bunch of undergraduate students, and I’ve come out the other side with more respect than ever for how generative AI is poised to transform learning.
This was the inaugural offering of my online course on basic prompt engineering. And with 72 students completing 30 graded assignments each — many of them using ChatGPT — I feel like I’ve undergone a ChatGPT baptism of fire.
I must confess that, when I designed the course, I didn’t think about how much work I was creating for myself. This has been a mountain of a task, with a good amount of time spent each day reading and reviewing wide-ranging conversations between students and ChatGPT (all using GPT-4). I’ve been blown away, though, by just how insightful these conversations have been in helping me understand the potential value of text-based generative AI platforms when working with students.
Naturally, the limitations of the platform have been very clear throughout the class — and this is something I explicitly teach. ChatGPT makes stuff up, it gets things wrong, it’s inconsistent, and sometimes it’s frustratingly hard to get it to do what you want it to.
And yet, despite this, I’m left with an overwhelming realization that, for all its limitations, ChatGPT is a profoundly effective catalyst for engaged and creative thinking. In fact, in some cases it seems to be effective because of its limitations, rather than despite them.
Rather than undermining the learning experience or encouraging mindless assignment completion, I saw clear evidence that class participants’ interactions with ChatGPT led to them thinking more deeply and creatively. It brought out their curiosity and helped refine their ability to ask questions and test the answers. And it stimulated their critical thinking and reasoning — even when they were frustrated by what ChatGPT was not able to do!
I had students doing deep and informative dives into topics that intrigued them, making serendipitous connections that surprised and delighted them, developing their own thinking by using ChatGPT as a sounding board, and engaging broadly in self-directed learning. I even had a number of students who struggle with conventional educational approaches tell me how transformative ChatGPT was in creating a learning environment that was uniquely responsive to them.
Reading these conversations and the associated assignments, it was as if working with ChatGPT turned on a light bulb inside my students’ heads in ways that I rarely see in other classes.
Of course this wasn’t completely unexpected. A lot of thought went into how the course was crafted and constructed, and it was intentionally designed to provide a learning experience that stimulated thinking. What I didn’t anticipate was just how effectively engaging with ChatGPT would enhance this.
There’s growing evidence that this isn’t an isolated example of generative AI stimulating creativity. As I was writing this article, Ethan Mollick posted a piece that opened with:
“The core irony of generative AIs is that AIs were supposed to be all logic and no imagination. Instead we get AIs that make up information, engage in (seemingly) emotional discussions, and which are intensely creative.”
Ethan’s piece draws on three papers that indicate that generative AI is a powerful engine of creativity. This aligns with my own in-class observations. But my sense is that the power of generative AI goes beyond creativity alone.
ChatGPT and other large language model-based platforms tap into deep wells of human knowledge and insight — far more effectively than any individual can — and yet they are incapable of simply parroting facts and figures. All the while, they’re incredibly articulate and responsive to their users’ level of understanding.
Because of their limitations, meaningful interactions with text-based generative AI demand that users engage their critical thinking skills — at least if they understand what they are doing. But as they do, they open up access to a wealth of ideas, connections, and concepts that might otherwise remain hidden from them.
This process, it seems, is aided by two things: the ability to engage in long and coherent conversations with these platforms, and their ability to articulate ideas and concepts in a language and at a level that makes them exceptionally accessible and relevant.
What results is a mode of learning that isn’t about facts and figures, but rather is about exercising and developing creative and critical thinking as users engage with brilliant but flawed AI systems that are able to connect with them on a very human level.
The irony, of course, is that this is a central tenet of modern education — it’s not about being told what to think, as much as it’s about learning how to think. And yet, judging by the deluge of articles and social media posts slamming ChatGPT and other generative AI platforms for getting the facts wrong, you’d be forgiven for thinking that this is all that matters.
Rather, what this course has shown is that, when used appropriately, generative AI can help create a learning environment that switches students’ minds on, and helps them develop their own understanding in ways that are near-impossible to achieve otherwise.
Of course, to take advantage of this, we need to move beyond the rather unimaginative idea that, because ChatGPT can be fooled into thinking 2 + 2 doesn’t equal 4, it’s a failed technology. And we need to embrace the idea that engaging with a knowledgeable and articulate yet imperfect AI companion can be powerfully transformational.
This, though, requires developing a level of AI literacy that leads to the intelligent and intentional use of ChatGPT and other platforms in learning — because there are plenty of “not smart” ways of using the technology! This means ensuring that instructors understand how to use generative AI in ways that enhance learning, as well as ensuring that students have a level of understanding and skills that empower them to thrive in a post-ChatGPT world.
Of course, I may still be suffering from an overdose of ChatGPT and will come to my senses in a day or two. But I don’t think so. For all its flaws, there’s something unexpected and emergent about generative AI that shakes the cage of conventional thinking when it comes to learning and education. In fact, I strongly suspect that, in many cases, it’s the flaws that make these systems useful!
The challenge, though, is whether we’re able to transcend conventional thinking — whether it’s mired in outmoded perspectives on education, or limited by narrow-minded assumptions about what AI should do — and recognize that what generative AI is really good at isn’t about accuracy, but about how it enables us to think and learn differently.
Coda
As I was wrapping up this piece, a flurry of articles started coming out questioning the utility, longevity, and even morality of generative AI platforms like ChatGPT. These included an essay worth reading from Gary Marcus on the economic and technological viability of ChatGPT (titled “What if Generative AI turned out to be a Dud?”) and an assessment in Analytics India suggesting that OpenAI may go bankrupt by the end of 2024. Then a long article was published this weekend in Rolling Stone highlighting the work of five women in AI — Timnit Gebru, Joy Buolamwini, Safiya Noble, Rumman Chowdhury, and Seeta Peña Gangadharan — and their warnings around the norms, biases, and morally questionable processes that are baked into technologies like ChatGPT.
I mention these because, despite my enthusiasm over the potential of generative AI, the landscape around it is becoming increasingly complex. And it is by no means certain that technologies such as ChatGPT, which look as if they could be transformative in education, will deliver on their promise.
Despite this, I remain optimistic. There are serious issues and challenges here that need to be addressed. And yet, from a learning and education perspective, I would still argue that we are seeing capabilities emerge that could change the face of how people learn; who gets access to teaching that’s tailored to their needs; and how education is scaled beyond its current institutional limits. As I argue above, I don’t believe that hallucinations are a limiting factor (although greater accuracy and trustworthiness would be good). And even if there’s an economic hiatus in the development of generative AI, we’ve already stepped over the threshold of understanding that will ensure the underlying technologies continue to evolve.
But there are risks here — moral as well as social and economic. And realizing the potential of ChatGPT and other generative AI platforms in learning and education will require open eyes, critical minds, and a commitment to ensuring the technology serves society, rather than the other way round.