4 Comments

It makes sense to respond to some immediate challenges without waiting for greater certainty about AI generally; one would not, for example, want to let identity theft run rampant, or disregard obvious environmental issues arising from, e.g., Bitcoin technology, while seeking a deeper understanding of generative AI. I think the caution comes in because many would-be regulators fail to differentiate between specific, addressable challenges and the broader policy implications of AI.


Interesting comparison in this analysis. Though I'd say many of the risks of AI technologies are already known: exponential disinformation, fraud and identity theft, child sexual abuse material, fake intimacy, automated bioweapons, the destabilisation of nation states, and the erosion of social cohesion and trust, not to mention the environmental footprint and supply chain externalities. We can acknowledge that there are policies, laws and regulations in place to deal with everything from consumer and data protection to antitrust and the environment, and monitoring and enforceability are always key considerations (systemic problems of regulatory capture aside). These technologies do introduce new categories of risk, and at a pace that is deeply concerning. But I agree that the patterns of old governance and top-down bureaucracy are not up to the challenge. I'd argue this goes for many problems interwoven with our metacrisis. So we need models of sense-making that are more networked, along with governance that is more decentralised and distributed, harnessing collective intelligence.


Thank you for this analysis, with which I largely agree. I suggest two international institutional arrangements that could be useful. First, the UN could sponsor technical working groups that look at risk assessment across various application areas (social media/communications, medicine, etc.). Second, I would like to see an off-the-record forum similar to the Pugwash conferences that nuclear scientists held during the Cold War. The focus would be on scientist-to-scientist discussion of risks and mitigation measures, particularly in the military domain.


Climate change computer models (pre-AI?) have been around since the '70s but never trickled down into a working solution, due to political-economic expediency that perverted their evolution? And was the IPCC a fig leaf to cover this, calling for a transformative change that can't happen in that context?
