ChatGPT shaming is a thing – and it shouldn't be
There's a growing tension between early and creative adopters of text-based generative AI and those who equate its use with cheating. And when this leads to shaming, it's a problem.
A colleague was recently finalizing a co-authored paper, and to ensure the abstract was clear, concise, and on-point, they enlisted the help of ChatGPT.
On mentioning this to their collaborator, though, they were all but accused of cheating.
They had been ChatGPT shamed!
This isn’t the first time I’ve come across this type of response to the use of ChatGPT. There’s a growing tension in collaborative and professional settings between people who are actively leveraging text-based generative AI, and those who assume there’s something slightly dirty or underhand about this. And when this comes to a head, people are being shamed for using AI.
In the case above, ChatGPT was being employed as a professional tool to support someone’s expertise, not as a substitute for it. ChatGPT Enterprise was being used, so there was no chance of data leakage. They were working with ChatGPT to help draft and refine a summary, not to generate original content. And they were assessing and refining text generated by ChatGPT based on their own knowledge.
In other words, they were using the technology as a professional tool that increased the quality of their work and its potential impact, while optimizing their time.
This will sound familiar to anyone who's incorporating generative AI into their professional workflows. But there are still many people who haven't used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they've developed strong opinions around how these tools should and should not be used.
These opinions often frame ChatGPT and similar apps as “cheating,” “taking shortcuts,” being “professionally underhand,” and “acting unethically.” And when triggered, they can quickly lead to intentional or inadvertent shaming of colleagues who are assumed to be acting inappropriately.
The irony is that AI is already deeply integrated into most professional workflows — and is rapidly becoming more so. Anyone relying on autocorrect, autofill, or grammar assistants like Grammarly is using AI. When I use Microsoft Outlook, it prompts me with what it thinks I want to say next as part of Microsoft's integration of AI into its products. You can now ask LinkedIn to use AI to assist with writing posts. And the video conferencing platform Zoom will summarize meetings and even allow you to ask questions about what went on, all using AI.
Even as I write this I have the option of asking Google Chrome to help me — using AI!
Of course, just because these capabilities exist does not mean that we should blindly accept them. But they do highlight a growing disconnect between the expanding use of generative AI in professional settings and a simultaneous rejection of it.
I suspect that this has many people scratching their heads, especially those who find that using apps like ChatGPT substantially increases their productivity. To those who work with ChatGPT constantly open in a browser window, who have an AI app on the equivalent of speed dial on their phone, or who regularly use applications like Perplexity, accusations of cheating must surely feel misplaced and unfair.
To many of them, I suspect it’s a bit like telling a chemist we should all be chemicals-free — to which the response is inevitably a confused “but everything’s made of chemicals!”
What I suspect we’re seeing is a lack of understanding around emerging AI capabilities, combined with deep-seated fears that the technology is challenging tightly held notions of how the world should be — fueled by echo-chamber rhetoric on social media and elsewhere that only reinforces a priori beliefs.
And while I'm focused on professional collaborations here, the problem of shaming also spills over into education, where poorly articulated (or entirely unarticulated) expectations can lead to students who believe they are being resourceful being shamed and even penalized.
To be clear, there are many challenges to how AI is being developed and used that need to be articulated and addressed, and not blithely swept aside. Yet simply placing colleagues (or even students) into a "bad behavior" bucket when they use generative AI in ways you don't approve of is not helpful.
So what could we be doing differently?
To start with, it's probably important to acknowledge that there are few right and wrong answers when it comes to the appropriate use of text-based generative AI. And this means that, for any collaboration or collaborative environment to be effective, there need to be clearly articulated expectations and ground rules around the use of tools like ChatGPT.
In the account of ChatGPT shaming above, my colleague equated the challenges here to challenges over author inclusion and order when co-writing academic papers — something that can become a minefield if not discussed up front.
It’s a helpful analogy for academics. But I suspect that for most people there needs to be something more concrete if ChatGPT shaming is to be avoided — something like a set of guidelines that can help avoid difficult situations down the line.
These will most likely need to be context specific. But to kick things off I thought I’d try the following out for size — with the proviso that this is just the start of a larger conversation (and a reminder that comments are always welcome):
Acknowledge that there’s a scale of comfort around using generative AI that stretches from it being equated with cheating, to it being a powerful productivity tool — and that opinions along this scale are not necessarily right or wrong;
Discuss and agree on boundaries and expectations around the use of generative AI at the start of any collaboration;
Do not be judgmental over how colleagues and collaborators use generative AI, recognizing that their perspective and experience may differ from your own;
Become familiar with how generative AI is being used in professional environments, including where and how it can be a useful tool and what its limitations are;
Try to avoid making decisions on generative AI use based on assumptions, beliefs, and hearsay;
Collaborate with humility, and be willing to change your perspective in the light of new information; and
Do not ChatGPT shame collaborators and colleagues — either for using generative AI, or not using it!
The reality is that AI — generative AI in particular — is changing the world we live and work in faster than we can sometimes keep up with, and this is leading to inevitable tensions around how it is and isn't being used.
Working through these tensions and the technology transition that’s driving them will be complex and often challenging. And these challenges are only going to escalate as generative AI is increasingly integrated into the devices we use.
Yet there’s a lot that can be done to reduce tensions and find positive ways forward by experimenting creatively, questioning without judgement, and collaborating with humility.
And, of course, not ChatGPT shaming each other!
Postscript
As I was finalizing this article I looked through Ethan Mollick’s recent book Co-Intelligence: Living and Working with AI to see if it offered any insights here.
Mollick is a strong proponent of embracing AI — but he's also thoughtful in his writing, and acknowledges the limitations and challenges as well as the potential of platforms like ChatGPT.
Mollick doesn't directly address ChatGPT shaming. But he does provide a highly accessible introduction to text-based (or natural language) generative AI that I'd highly recommend anyone read before they make snap judgements on the use of the technology — or on the moral character of those who use it.
In particular, his “four rules for co-intelligence” provide a useful framing that I think will help open up conversations around use cases for ChatGPT and similar platforms in professional settings.
These are:
Principle 1: Always invite AI to the table
Principle 2: Be the human in the loop
Principle 3: Treat AI like a person (but tell it what kind of person it is)
Principle 4: Assume this is the worst AI you will ever use
Ethan admits that Principle 2 is controversial — and not everyone will agree with this one or the other three. But if nothing else, they're a great way of steering conversations away from the ChatGPT shame game and toward a mutual understanding of appropriate uses and behaviors.
Comments
In writing fiction, there's a HUGE amount of shaming on this topic from very vocal and polarized personalities. I often point out exactly what you articulated: we already have AI baked into almost everything, and it can be a huge asset without replacing human creativity.
For example, writing the back cover blurb for a book is one of the hardest tasks for an author. Distilling a book down to a few paragraphs that can capture an audience is tough. Whereas most of my chapters have four versions at most, my back blurb has 15+. I finally ran a good copy through ChatGPT to see if it could help, and it had some good adds and some things it got wrong. But it did help.
But back to the humans in a panic. Most of them don't recognize how artists also copy shamelessly. They just call it inspiration. Ironically, one of the most common interview questions for authors is "Who inspired your book?" Another way to read that is "Whose work did you emulate (copy) in style, concept, world building, etc., to make your work faster and more accessible?"
I wrote about that in this essay on "AI Is[n't] Killing Artists"
https://www.polymathicbeing.com/p/ai-isnt-killing-artists
I think this is an interesting piece. I'm not sure that shaming is the right framing for it. Though I don't have a better word for this, to me shaming has some other implications, such as intentional cruelty and bullying.
I also wonder if "human in the loop" gives us the right perspective. I don't know where I ran across this, but maybe we should think in terms of "AI in the loop," at least for some things. We are the ones with agency. For most tasks, current AI either interacts with us, complements and supplements our abilities, or extends our agency. The kind of use your friend describes, using it for editing and polishing, begins and ends with a human. It is the AI that is inside the "loop." That may sound trivial, but small changes in framing can have larger effects.
The other thing that struck me about this is the assumption that it is always about a lack of knowledge. There are sometimes other aspects that may enter into reactions to AI use: accusations not of cheating but of irresponsibility. These hinge more on the way AI is commonly deployed these days than on something inherent in the technology. I have heard more than one discussion among savvy professors who use AI extensively in which they question whether it is safe to upload their unpublished writing, grant proposals, etc. into an AI. These include people who are well-versed in AI, even at the technical level. Until those kinds of concerns are addressed, and in ways that everyone can afford, this is going to be a problem.
It is not always a case of misunderstanding how AI works and what it can do; it is also possible that mistrust of the companies, and of the ways they have deployed their products, is a real stumbling block. Given the cost of secure AI, this is a real issue.