Discussion about this post

Michael Woudenberg

In fiction writing, there's a HUGE amount of shaming on this topic from very vocal and polarized personalities. I often point out exactly what you articulated: we already have AI baked into almost everything, and it can be a huge asset without replacing human creativity.

For example, writing the back-cover blurb for a book is one of the single hardest tasks for an author. Distilling a book down to a few paragraphs that can capture an audience is tough. Whereas most of my chapters have four versions at most, my back blurb has 15+. I finally ran a good copy through ChatGPT to see if it could help, and it had some good additions and some things it got wrong. But it did help.

But back to the humans in a panic. Most of them don't recognize that artists also copy shamelessly; they just call it inspiration. Ironically, one of the most common interview questions for authors is "Who inspired your book?" Another way to read that is "Whose work did you emulate (copy) in style, concept, world-building, etc. to make your work faster and more accessible?"

I wrote about that in this essay on "AI Is[n't] Killing Artists"

https://www.polymathicbeing.com/p/ai-isnt-killing-artists

Guy Wilson

I think this is an interesting piece. I'm not sure that shaming is the right framing for it, though I don't have a better word for it; to me, shaming carries other implications, such as intentional cruelty and bullying.

I also wonder if "human in the loop" gives us the right perspective. I don't know where I ran across this, but maybe we should think in terms of "AI in the loop," at least for some things. We are the ones with agency. For most tasks, current AI either interacts with us, complements and supplements our abilities, or extends our agency. The kind of use your friend describes, using it for editing and polishing, begins and ends with a human. It is the AI that is inside the "loop." That may sound trivial, but small changes in framing can have larger effects.

The other thing that struck me about this is the assumption that it is always about a lack of knowledge. There are sometimes other aspects of AI use that enter into reactions to it: accusations not of cheating but of irresponsibility. These hinge more on the way AI is commonly deployed these days than on something inherent in it. I have heard more than one discussion among savvy professors who use AI extensively in which they question whether it is safe to upload their unpublished writing, grant proposals, etc., into an AI. These include people who are well-versed in AI, even at the technical level. Until those kinds of concerns are addressed, and in ways that everyone can afford, that's going to be a problem.

It is not always a case of misunderstanding how AI works and what it can do; it is also possible that mistrust of the companies, and of the ways they have deployed their products, is a real stumbling block. Given the cost of secure AI, this is a real issue.
