ChatGPT shaming is a thing – and it shouldn't be
There's a growing tension between early and creative adopters of text-based generative AI and those who equate its use with cheating. And when this leads to shaming, it's a problem.
A colleague was recently finalizing a co-authored paper, and to ensure the abstract was clear, concise, and on-point, they enlisted the help of ChatGPT.
On mentioning this to their collaborator, though, they were all but accused of cheating.
They had been ChatGPT shamed!
This isn’t the first time I’ve come across this type of response to the use of ChatGPT. There’s a growing tension in collaborative and professional settings between people who are actively leveraging text-based generative AI, and those who assume there’s something slightly dirty or underhand about this. And when this comes to a head, people are being shamed for using AI.
In the case above, ChatGPT was being employed as a professional tool to support someone's expertise, not as a substitute for it. They were using ChatGPT Enterprise, which substantially reduces the risk of data leakage. They were working with ChatGPT to help draft and refine a summary, not to generate original content. And they were assessing and refining the text ChatGPT generated based on their own knowledge.
In other words, they were using the technology as a professional tool that increased the quality of their work and its potential impact, while optimizing their time.
This will sound familiar to anyone who’s incorporating generative AI into their professional workflows. But there are still many people who haven’t used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they’ve nevertheless developed strong opinions around how they should and should not be used.
These opinions often frame ChatGPT and similar apps as "cheating," "taking shortcuts," being "professionally underhand," and "acting unethically." And when they're triggered, these opinions can quickly lead to the intentional or inadvertent shaming of colleagues who are assumed to be acting inappropriately.
The irony is that AI is already deeply integrated into most professional workflows, and is rapidly becoming more so. Anyone relying on autocorrect, autofill, or grammar assistants like Grammarly is using AI. When I use Microsoft Outlook, it prompts me with what it thinks I want to say next as part of Microsoft's integration of AI into its products. You can now ask LinkedIn to use AI to assist with writing posts. And the video conferencing platform Zoom uses AI to summarize meetings, and will even let you ask questions about what went on in them.
Even as I write this I have the option of asking Google Chrome to help me — using AI!
Of course, just because these capabilities exist does not mean that we should blindly accept them. But they do highlight a widening disconnect between the growing use of generative AI in professional settings and its simultaneous rejection.
I suspect that this leaves many people scratching their heads, particularly those who are finding that apps like ChatGPT substantially increase their productivity. To those who work with ChatGPT constantly open in a browser window, have an AI app on the equivalent of speed-dial on their phone, or regularly use applications like Perplexity, accusations of cheating must surely feel unjustified.
To many of them, I suspect it’s a bit like telling a chemist we should all be chemicals-free — to which the response is inevitably a confused “but everything’s made of chemicals!”
What I suspect we’re seeing is a lack of understanding around emerging AI capabilities, combined with deep-seated fears that the technology is challenging tightly held notions of how the world should be — fueled by echo-chamber rhetoric on social media and elsewhere that only reinforces a priori beliefs.
And while I'm focused on professional collaborations here, concerns over shaming also spill over into education, where ill-articulated (or entirely unarticulated) expectations can lead to students being shamed, and even penalized, for what they believed was resourcefulness.
To be clear, there are many challenges to how AI is being developed and used that need to be articulated and addressed, not blithely swept aside. Yet simply placing colleagues (or even students) into a "bad behavior" bucket when they use generative AI in ways you don't approve of is not helpful.
So what could we be doing differently?
To start with, it's probably important to acknowledge that there are few clear-cut right and wrong answers around the appropriate use of text-based generative AI. And this means that, for any collaboration or collaborative environment to be effective, there need to be clearly articulated expectations and ground rules around the use of tools like ChatGPT.
In the account of ChatGPT shaming above, my colleague equated the challenges here to challenges over author inclusion and order when co-writing academic papers — something that can become a minefield if not discussed up front.
It's a helpful analogy for academics. But I suspect that for most people there needs to be something more concrete if ChatGPT shaming is to be avoided: something like a set of guidelines that can head off difficult situations down the line.
These will most likely need to be context specific. But to kick things off I thought I’d try the following out for size — with the proviso that this is just the start of a larger conversation (and a reminder that comments are always welcome):
1. Acknowledge that there's a scale of comfort around using generative AI, stretching from it being equated with cheating to it being a powerful productivity tool, and that opinions along this scale are not necessarily right or wrong;
2. Discuss and agree on boundaries and expectations around the use of generative AI at the start of any collaboration;
3. Do not be judgmental about how colleagues and collaborators use generative AI, recognizing that their perspective and experience may differ from your own;
4. Become familiar with how generative AI is being used in professional environments, including where and how it can be a useful tool and what its limitations are;
5. Try to avoid making decisions about generative AI use based on assumptions, beliefs, and hearsay;
6. Collaborate with humility, and be willing to change your perspective in the light of new information; and
7. Do not ChatGPT shame collaborators and colleagues, either for using generative AI or for not using it!
The reality is that AI, and generative AI in particular, is changing the world we live and work in faster than many of us can keep up with, and this is leading to inevitable tensions around how it is, and isn't, used.
Working through these tensions and the technology transition that’s driving them will be complex and often challenging. And these challenges are only going to escalate as generative AI is increasingly integrated into the devices we use.
Yet there’s a lot that can be done to reduce tensions and find positive ways forward by experimenting creatively, questioning without judgement, and collaborating with humility.
And, of course, not ChatGPT shaming each other!
Postscript
As I was finalizing this article I looked through Ethan Mollick’s recent book Co-Intelligence: Living and Working with AI to see if it offered any insights here.
Mollick is a strong proponent of embracing AI. But he's also thoughtful in his writing, and acknowledges the limitations and challenges as well as the potential of platforms like ChatGPT.
Mollick doesn't directly address ChatGPT shaming. But he does provide a highly accessible introduction to text-based (or natural language) generative AI that I'd recommend anyone read before making snap judgements about the use of the technology, or about the moral character of those who use it.
In particular, his “four rules for co-intelligence” provide a useful framing that I think will help open up conversations around use cases for ChatGPT and similar platforms in professional settings.
These are:
Principle 1: Always invite AI to the table
Principle 2: Be the human in the loop
Principle 3: Treat AI like a person (but tell it what kind of person it is)
Principle 4: Assume this is the worst AI you will ever use
Mollick admits that Principle 2 is controversial, and not everyone will agree with it, or with the other three. But if nothing else, the principles are a great way of steering conversations away from the ChatGPT shame game and toward a mutual understanding of appropriate uses and behaviors.
Comments

What an insightful and timely reflection on the evolving landscape of professional collaboration in the age of AI. The scenario you described, where the use of ChatGPT in drafting a paper's abstract led to accusations of cheating, resonates deeply with the experiences of many professionals navigating this new terrain.
Your call to establish clear guidelines and expectations around the use of generative AI tools like ChatGPT at the outset of collaborations is particularly salient. By fostering open dialogue and setting shared understandings, we can mitigate misunderstandings and prevent instances of unwarranted shaming.
Furthermore, your emphasis on humility and a willingness to adapt our perspectives in light of new information is crucial. As AI continues to reshape our workflows and professional landscapes, it's imperative that we approach its integration with an open mind and a readiness to learn.
The analogy to the inclusion and order of authors in academic papers aptly underscores the importance of proactive communication and clarity in collaborative endeavors. Just as discussions around authorship prevent conflicts, conversations about the use of AI tools can preempt misunderstandings and promote productive partnerships.
Ultimately, as you eloquently put it, embracing a mindset of creative experimentation and collaborative humility is key to navigating the complexities of AI integration. Let's endeavor to foster environments where colleagues feel empowered to leverage AI tools without fear of judgment or stigma. Thank you for sharing your insights and fostering this important conversation.
--This comment was written with the aid of a large language model. It is of course a sincere and accurate reflection of my personal views. No shaming!--
A few thoughts:
(A) If I can add a #8, "Fully and transparently disclose where and how GenAI was used, to all parties anywhere downstream." Discussing with collaborators is good, but I'd argue that readers/editors/etc also have rights to know, in the same way we expect people to cite their sources. This is potentially complex (e.g., when spell check moves to GenAI), but really important IMO.
(B) If I can add a #9, it's "don't let yourself commit a Motte and Bailey." I think folks are very eager to reap the benefits of using GenAI for their advantage... and then particularly excited to be able to throw GenAI under the bus when it goes wrong, rather than taking personal accountability. The Air Canada chatbot lawsuit is a quintessential example of this: Air Canada is delighted to exploit AI to cut its costs, reduce labour, etc... but when it says something obviously wrong, it reveals a second huge benefit... Air Canada wants to use it as a liability reduction device, where the AI can be blamed and they can avoid any culpability/exposure/responsibility for it going wrong. This is a sort of Motte & Bailey, or cake-and-eat-it-too situation, that I think is really dangerous. If you want to use GenAI, you have to personally own the mistakes it produces. If you don't want to own the mistakes, you need to not use it. You can't have it both ways.
(C) Like Guy, I'm not sure 'shaming' is the most useful framing. Much like "murder" is "death + you're wrong to do it" (i.e., if someone can label something murder, it closes any further discussion, because the definition de facto makes it wrongful), "shaming" has both the layer of judgement and an implicit "wrongful/inappropriate/unkind" layer. It strikes me as though the wrongfulness of the judgement likely varies a lot from context to context, and changes a lot based on the nature of the judgement itself. Debating whether it's okay to judge uses of GenAI leaves a lot more room for legitimate disagreement than "can we shame people?" (which, obviously, no, because shaming people is bad). Of course, I get how the provocative superlative is good for engagement, as I'm here commenting as a result, aren't I :p... I just hate that about online discourses!
(D) I don't think it's fair to take an entirely symmetrical "you do you; no perspectives are more or less right than another" stance with GenAI in the current configuration of society. While "talk with your collaborators and get authentic, honest consent" is a nice aspirational goal, respecting each other's perspectives and preferences isn't symmetric in the harms it might produce.
To be clear, I'm a proponent (e.g., I use GenAI in class activities to help build familiarity and teach responsible use), but... the potential harms aren't equal here. For instance, a co-author using ChatGPT is exposing their collaborator to potentially significant risks (e.g., having a paper retracted for hallucinated citations), while the non-AI user is only exposing their collaborator to... maybe writing papers slower? A programmer using ChatGPT is exposing not just their collaborators to the risk of lawsuits if the tech goes wrong, but also potentially vulnerable road users, medical patients, etc. to much more material consequences, with no ability to consent.
As a non-AI example, consider a hockey player taking performance enhancement drugs. We can have all sorts of interesting conversations until the cows come home about whether these drugs are good or bad, right or wrong, whatever. But, the reality is that the player taking them is exposing their team to all sorts of potential harms and risks without their consent, entirely asymmetrically.
While the list is ostensibly neutral, it leans very heavily towards "people should get more comfortable with AI" rather than, say, "GenAI boosters should stop and think about what earnest, informed consent means, and the ways that pushing people to 'open their minds to a new technology' might undermine genuine consent." I reckon the latter is not only more useful for thinking about AI than "learn to embrace new technologies!"... but also more helpful as a general skill for literally every other topic in life, too.