4 Comments

Thank you for this thought-provoking essay. I just finished reading Yuval Noah Harari’s latest book, “Nexus”, which provides deeper historical context for understanding the threats posed by such AI agents. Strongly recommended.

I look forward to following your thoughts on this topic.


The fact that somebody was harmed by a chatbot needs to be put into a larger context. How many people were helped versus how many were harmed? Tens of thousands of people are killed each year in car accidents, with many more injured. But based on a "helped vs. hurt" analysis, we've concluded cars are worth the steep price involved.

Next, there really is no way to stop the further development and use of chatbots. Even if all chatbots were made illegal in America and Europe, that's only about 15% of the world's population. We don't have the power that is so often assumed in these kinds of conversations. As an example, many drugs are illegal in very many countries, yet the world is still flooded with those drugs. If someone wants to buy them, someone else will be willing to sell them, and a way will be found to conduct the transaction.

All we can really do is shatter the myth that we can control the further development of AI; if that reality is grasped, we may become more sober about future technologies.


Yes, I just did a post on that. The standard playbook: create addictive tech, users become addicted, blame the user for being vulnerable (which they may well not have been), wheel out the academics (here, ex-Google from Stanford) or pay for your own research to say it’s all fine really, and put that about widely. Result: nothing then happens. Business as usual.

https://www.linkedin.com/posts/hilary-sutcliffe-01235220_ai-activity-7255487007307579392-oIW9?utm_source=share&utm_medium=member_ios

author

Thanks Hilary -- an important post (especially given the focus of your work on the addiction economy), and I don't know how I missed it when you posted it -- algorithms!!
