6 Comments

ChatGPT said, "The story resonates not because I’ve experienced human life, but because I’ve learned to recognize the patterns of thought, emotion, and thematic depth that humans express in their literature and discussions."

I'm wondering if we're exaggerating the divide between humans and AI. ChatGPT recognizes and imitates patterns of human behavior. Isn't that essentially what we do as well?

If a reader should either praise or insult me in response to this comment, where did I learn my reaction to that experience? I surely didn't invent a reaction myself, because such reactions have been happening since very long before I was born. Didn't I learn as a child what are considered normal reactions to praise or insult? Whatever my reaction, wouldn't I be mostly mirroring behavior I've observed in others?

The easiest place to see this may be in the realm of opinions. For example, during the recent election season, weren't most of us simply repeating the memorized dogmas of our chosen tribe? On all subjects, isn't the group consensus, whatever most people think, say, and consider reasonable, a dominant factor in what we ourselves think and say?

I know, it doesn't FEEL that way. We like to consider ourselves independent agents operating on our own, crafting our own perspectives from the ground up, and so on. Isn't there a great deal of ego-powered self-delusion involved in that feeling?

What if AI is doing essentially what we do, but because it has no ego, it's doing so more honestly than we are?


You ask about ChatGPT: "Is this simply a machine parroting human behavior?"

Most of the time, aren't we humans biological machines parroting the cultural norms of the social environment we inhabit?

Maybe ChatGPT is actually doing a pretty good job of acting in a human-like manner?


Reading this makes me wonder if our proverbial parrot has flown the coop. The responses here seem quite sophisticated and nuanced, and the text was quite compelling. Put this response in the new human-like voices and it starts to get surreal. Thanks for all this, Andrew.


This is really interesting. I can't help but wonder how much of this is a stochastic parrot getting more accurate, and how much of it is actually starting to 'understand' any of it. The problem with complex language is that it sounds thoughtful and intelligent, but that's just complex language from an algorithm designed and trained to produce it.


I think it's still a "stochastic parrot" -- but, as others have pointed out, there's a good chance that we too are something like sophisticated stochastic parrots, with the line between emulation and understanding being very blurry.


The risk is double-edged:

1. We over-attribute.

2. We fail to realize it's no longer a parrot.
