5 Comments

Andrew says: "Practically, one possible way forward (by no means the only one) is to rapidly convene a world congress of diverse leading experts, thought leaders, and decision makers..."

Er... so invite roughly the same self-referential elite thought-bubble that is currently driving (or losing control of?) high-tech innovation trends? What's that hackneyed Einstein quote about trying to solve problems with the same kind of thinking that created them?

One of the big problems with the AI governance story as articulated by the Future of Life letter is that it fully invisibilizes the already-present problems and threats of sub-GPT-4 algorithmic decision making and non-LLM AI (for people of colour, farmers, low-income and marginalized people, for example). It does this by locating the 'AI problem' in some future 'existential risk' scenario that also threatens white men of privilege, and therefore should now be taken seriously, when rampant racist algorithms and oligopoly control over data to nudge behaviours were considered no big deal. That's why someone like Elon Musk can happily sign this letter -- it doesn't challenge power one iota.

Convening a meeting of the great-and-the-good (even with a multistakeholder sprinkling of hand-picked NGOs for optics) will just continue this invisibilization of AI's current and real harms and strengthen elite power interests over AI. How about instead a democratically convened and governed set of people's assemblies, moving towards a societal set of proposals through citizens' juries, for example -- like the bottom-up process that was behind the abortion referendum in Ireland? Then a moratorium isn't just a meaningless x number of months to hold back the opposition, but is conditioned on genuine societal discussion and engagement.

By the way, great to see you on Substack, Andrew... here's mine: http://www.scanthehorizon.org :-)

author

Ah, this is why I value your opinion so much, Jim -- it brings a much-needed balance to my oft-blinkered perspectives!

Yes, of course -- there's a world of persons, communities and futures that need to have a voice in the process here. My one worry is that there's a very real risk of getting so bogged down in process that the tech (and the people pushing it) far outstrips the conversations that could help modulate innovations in socially responsible and beneficial ways.


You write, "Large Language Models (LLMs) hint at a deeply complex risk landscape that we are, at this point, unprepared to navigate."

The history of nuclear weapons should demonstrate that we're unlikely to be prepared to navigate an AI future for at least decades. It's been almost 80 years since Hiroshima, and we still have no clue how to get rid of these things. No clue.

QUESTION: What are the compelling benefits of AI which justify taking on another substantial risk at a time when we already face so many others?

I've been asking this question repeatedly everywhere I go in AI land, and have yet to hear an answer.

Before we proceed with artificial intelligence we should first see if we can discover human intelligence. It must be out there somewhere...


The intergovernmental organisation OECD has set out principles for responsible innovation in AI and neurotechnologies.

https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449

https://www.oecd.org/sti/emerging-tech/recommendation-on-responsible-innovation-in-neurotechnology.htm

author

Thanks -- yes, there are a lot of sets of principles around that make a great starting point. The IEEE approach to Ethically Aligned Design is another good resource: https://standards.ieee.org/industry-connections/ec/ead1e-infographic/
