13 Comments

What an insightful and timely reflection on the evolving landscape of professional collaboration in the age of AI. The scenario you described, where the use of ChatGPT in drafting a paper's abstract led to accusations of cheating, resonates deeply with the experiences of many professionals navigating this new terrain.

Your call to establish clear guidelines and expectations around the use of generative AI tools like ChatGPT at the outset of collaborations is particularly salient. By fostering open dialogue and setting shared understandings, we can mitigate misunderstandings and prevent instances of unwarranted shaming.

Furthermore, your emphasis on humility and a willingness to adapt our perspectives in light of new information is crucial. As AI continues to reshape our workflows and professional landscapes, it's imperative that we approach its integration with an open mind and a readiness to learn.

The analogy to the inclusion and order of authors in academic papers aptly underscores the importance of proactive communication and clarity in collaborative endeavors. Just as discussions around authorship prevent conflicts, conversations about the use of AI tools can preempt misunderstandings and promote productive partnerships.

Ultimately, as you eloquently put it, embracing a mindset of creative experimentation and collaborative humility is key to navigating the complexities of AI integration. Let's endeavor to foster environments where colleagues feel empowered to leverage AI tools without fear of judgment or stigma. Thank you for sharing your insights and fostering this important conversation.

--This comment was written with the aid of a large language model. It is, of course, a sincere and accurate reflection of my personal views. No shaming!--

author

No shaming :) And thanks Michael!

May 13 · Liked by Andrew Maynard

A few thoughts:

(A) If I can add a #8: "Fully and transparently disclose where and how GenAI was used, to all parties anywhere downstream." Discussing with collaborators is good, but I'd argue that readers/editors/etc. also have a right to know, in the same way we expect people to cite their sources. This is potentially complex (e.g., when spell check moves to GenAI), but really important IMO.

(B) If I can add a #9, it's "don't let yourself commit a Motte and Bailey." I think folks are very eager to reap the benefits of using GenAI to their advantage... and then particularly excited to be able to throw GenAI under the bus when it goes wrong, rather than taking personal accountability. The Air Canada chatbot lawsuit is a quintessential example of this: Air Canada is delighted to exploit AI to cut its costs, reduce labour, etc... but when the AI says something obviously wrong, it reveals a second huge benefit... Air Canada wants to use it as a liability-reduction device, where the AI can be blamed and they can avoid any culpability/exposure/responsibility for it going wrong. This is a sort of Motte and Bailey, or have-your-cake-and-eat-it-too move, that I think is really dangerous. If you want to use GenAI, you have to personally own the mistakes it produces. If you don't want to own the mistakes, you need to not use it. You can't have it both ways.

(C) Like Guy, I'm not sure 'shaming' is the most useful framing. Much like "murder" is "death + you're wrong to do it" (i.e., if someone can label something murder, it closes any further discussion, because the definition de facto makes it wrongful), "shaming" has both the layer of judgement and an implicit "wrongful/inappropriate/unkind" layer. It strikes me as though the wrongfulness of the judgement likely varies a lot from context to context, and changes a lot based on the nature of the judgement itself. Debating whether it's okay to judge uses of GenAI leaves a lot more room for legitimate disagreement than "can we shame people?" (which, obviously, no, because shaming people is bad). Of course, I get how the provocative superlative is good for engagement, as I'm here commenting as a result, aren't I :p... I just hate that about online discourse!

(D) I don't think it's fair to take an entirely symmetrical "you do you; no perspective is more or less right than another" stance with GenAI in the current configuration of society. While "talk with your collaborators and get authentic, honest consent" is a nice aspirational goal, respecting each other's perspectives and preferences isn't symmetric in the harms it might produce.

To be clear, I'm a proponent (e.g., I use GenAI in class activities to help build familiarity and teach responsible use), but... the potential harms aren't equal here. For instance, a co-author using ChatGPT is exposing their collaborator to potentially significant risks (e.g., having a paper retracted for hallucinated citations), while the non-AI user is only exposing their collaborator to... maybe writing papers slower? A programmer using ChatGPT is exposing not just their collaborators to the risk of lawsuits if the tech goes wrong, but also potentially vulnerable road users, medical patients, etc., to much more material consequences with no ability to consent.

As a non-AI example, consider a hockey player taking performance-enhancing drugs. We can have all sorts of interesting conversations until the cows come home about whether these drugs are good or bad, right or wrong, whatever. But the reality is that the player taking them is exposing their team to all sorts of potential harms and risks without their consent, entirely asymmetrically.

While the list is ostensibly neutral, it leans very heavily towards "people should get more comfortable with AI" rather than, say, "GenAI boosters should stop and think about what earnest, informed consent means, and the ways that pushing people to 'open their minds to a new technology' might undermine genuine consent." I reckon the latter is not only more useful for thinking about AI than "learn to embrace new technologies!"... but also more helpful as a general skill for literally every other topic in life, too.

author

Thanks Eric -- really insightful comments, and why I hoped people would jump in here!

There's clearly a lot more to be unpacked here than I touched on -- which doesn't surprise me -- and it really underscores the need for more open discussion to establish norms, expectations and guidelines. It worries me how little we're doing this (or having the chance to do it) in academia.

I am interested in what a better framing than shaming would be. I agree that this one doesn't hit the mark more broadly, but as I mentioned to Guy, the feelings that some of these incidents elicit are definitely feelings of shame and embarrassment -- a reminder that making someone feel shame doesn't require intent behind it, but is potentially damaging nevertheless.

May 13 · Liked by Andrew Maynard

Great post! I'm a complete hypocrite here.

As a K12 teacher, I see AI tools being forced on us. Why lesson plan and perfect your craft of teaching when AI can do it for you? I hate that.

Yet on my own, I use ChatGPT all the time. I've kept a database of misspelled words for years now, and earlier this year I tried using ChatGPT to analyze it for trends in misspellings so I can focus my instruction -- aim for the 80-20 rule. What a wonderful tool! ChatGPT has also helped me improve writing prompts.
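
(Side note: the trend analysis described above doesn't strictly need an LLM. Here's a minimal Python sketch of the same 80-20 idea; the file name misspellings.txt and its one-word-per-line layout are illustrative assumptions, not the commenter's actual database.)

```python
from collections import Counter

# Load the error log: one misspelled word per line (hypothetical format).
with open("misspellings.txt") as f:
    words = [line.strip().lower() for line in f if line.strip()]

counts = Counter(words)
total = sum(counts.values())

# Walk the most frequent misspellings until they cover ~80% of all
# errors -- the short list that prints is the 20% worth teaching first.
covered = 0
for word, n in counts.most_common():
    covered += n
    print(f"{word}: {n} errors ({covered / total:.0%} cumulative)")
    if covered / total >= 0.8:
        break
```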

But when my kids use Grammarly... Don't get me started on how too much AI for some tasks cuts my students off at the knees!

So yeah. Total hypocrite, I am.

author

I think this is a dilemma a lot of us struggle with :) There's a world of difference, though, between not encouraging AI use because you are aware of the consequences and have specific goals you are aiming for, and not encouraging it because you are scared of what you don't know.


The ChatGPT shaming seems to boil down to:

1) ignorance

2) fear

3) the human love of melodrama

And....

A great many critics of AI haven't figured out yet that the AI transition is going to happen no matter what our opinions on AI might be. From that position of ignorance, they are enjoying the wishful-thinking illusion that our perspectives on AI matter. And so they argue this or that position as if a debate would make some kind of difference.

The question becomes far simpler when we come to understand that nobody on Earth can stop the further development of AI. If every company in Silicon Valley were to walk away from AI, other nations would happily continue to push the technology forward.

We aren't drivers of the AI bus, we are merely passengers. Once that is understood and accepted, complaining about AI begins to seem much like yelling at the weather.


Eventually AI writing assistants will become invisible to us as they are incorporated into our tools. There’s a lot of unfounded fear and ignorance about AI. Still, I think early adopters are reaping the benefits through efficiency and productivity.


In fiction writing, there's HUGE shaming on this topic from very vocal and polarized personalities. I often point out exactly what you articulated: we already have AI baked into almost everything, and it can be a huge asset without replacing human creativity.

For example, writing the back cover blurb for a book is one of the single hardest tasks for an author. Distilling a book down to a few paragraphs that can capture an audience is tough. Whereas most of my chapters have 4 versions at most, my back blurb has 15+. I finally ran a good copy through ChatGPT to see if it could help; it had some good adds and some things it got wrong. But it did help.

But back to the humans in a panic. Most of them don't recognize that artists also copy shamelessly. They just call it inspiration. Ironically, one of the most common interview questions for authors is "Who inspired your book?" Another way to read that is "Whose work did you emulate (copy) in style, concept, world building, etc. to make your work faster and more accessible?"

I wrote about that in this essay, "AI Is[n't] Killing Artists":

https://www.polymathicbeing.com/p/ai-isnt-killing-artists


I think this is an interesting piece. I'm not sure that shaming is the right framing for it. Though I don't have a better word for this, to me shaming has some other implications, such as intentional cruelty and bullying.

I also wonder if "human in the loop" gives us the right perspective. I don't know where I ran across this, but maybe we should think in terms of "AI in the loop," at least for some things. We are the ones with agency. For most tasks, current AI either interacts with us, complements and supplements our abilities, or extends our agency. The kind of use your friend describes, using it for editing and polishing, begins and ends with a human. It is the AI that is inside the "loop." That may sound trivial, but small changes in framing can have larger effects.

The other thing that struck me about this is the assumption that it is always about lack of knowledge. There are sometimes other aspects that enter into reactions to AI use -- accusations not of cheating but of irresponsibility. These hinge more on the way AI is commonly deployed these days than on anything inherent in it. I have heard more than one discussion among savvy professors who use AI extensively in which they question whether it is safe to upload their unpublished writing, grant proposals, etc. into an AI. These include people who are well-versed in AI, even at the technical level. Until those kinds of concerns are addressed, and in ways that everyone can afford, that's going to be a problem.

It is not always a case of misunderstanding how AI works and what it can do; it is also possible that mistrust of the companies, and the ways they have deployed their products, is a real stumbling block. Given the cost of secure AI, this is a real issue.

author

Thanks Guy.

I agree there's lots to think through here -- and I wanted to start a conversation as much as anything. I suspect that shame has, as you say, connotations that don't fit here -- I was thinking more about the feelings that being called out as a cheat brings out in someone who thought they were behaving appropriately.

The lack of knowledge issue is also complex -- I set out to try to be more nuanced here, but it messed up the narrative. I've definitely seen instances of people with strong ideas about generative AI who really don't understand how it's being used or even what it can and can't do. But there are also plenty of people who know what they are talking about who have reservations -- not that this is an excuse for making people feel bad if you disagree with them.


Andrew, I also wonder how this plays out at different institutions. I know you are at ASU but don't know if your colleague and their co-author are there as well. How much depends on whether an institution has deeply invested time, effort, and resources into GenAI as ASU and Florida have done, as opposed to schools that are just beginning to move in that direction?

author

Hard to tell -- at ASU there's a very strong push to embrace generative AI, but there are also a lot of faculty who don't feel part of the conversation and are suspicious of its use. I suspect the same applies elsewhere.
