20 Comments

Andrew Maynard writes....

"At the same time, I think that a more nuanced understanding is needed of how people and society work — and what defines us and gives meaning to our lives — would be helpful in thinking about what an accelerated AI future might look like; if only because at the heart of every failed technological dream and unexpected technological turn, there are people behaving as people are wont to do."

This sounds wise to me. And how do we humans behave? We take any new power given to us and use it both for good and for ill. Everybody gets further empowered, both those with noble intentions and those with evil plans. And, as the scale of power available to us grows, the price tag for well-intended mistakes grows as well.

Instead of focusing on particular technologies, as is so common, we should take a step back and shift our focus to the knowledge explosion as a whole. We should be asking larger questions such as...

1) How much power can an adult human being successfully manage? This is the very same question we routinely and sensibly apply to children, but then typically forget to ask of ourselves as adults. https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of

2) How fast can society successfully absorb new technologies? Society may be able to adapt well to a technological change at one rate, but a faster rate of adoption may introduce too much change too quickly, and result in social calamity. People need time to adapt. More and bigger changes delivered ever faster is not automatically good.

A key concept to keep in mind is that as the scale of power available to us grows, it becomes ever more possible for the threatening aspects of a new power to erase all of its benefits. An obvious example is nuclear weapons, from which we seem to have learned almost nothing.

As a further example, imagine that super AI meets and even exceeds the revolutionary positive benefits predicted by AI's most ardent supporters. Poverty is eliminated, cancer is cured, etc. Fantastic!!! But...

None of those benefits are going to matter much if, for example, the game players on Wall Street misuse AI and manage to crash the global economy, resulting in WWIII between the great powers. As a reminder, the game players on Wall Street played a central role in crashing the economy in 1929, helping set the stage for WWII, the biggest catastrophe in human history up to that point.

Don't listen to what the Silicon Valley tech bros say about the benefits of super AI. Ask them instead how they plan to manage the downsides, because if the downsides of any great power are not successfully managed, the benefits provided by that power may turn out to be irrelevant.

As you ask about managing the downsides of coming AI developments, keep the following in mind.

The only way to manage the future of AI development is to be able to manage those human beings around the world who are developing AI. And....

That's not possible. Ask the Silicon Valley tech bros how they intend to manage Putin's use of AI.


It's really challenging to reconcile the motives of a for-profit company that sees a marketing benefit in calling itself a "public benefit corporation," presumably for its own self-enrichment, without any appreciable public benefits. OpenAI is guilty of this as well, and one day I might wake up and see the game for what it is: get money and do something useful with it.


Full disclosure: I find the "reasoning" and context window of Anthropic's models valuable and I use them extensively, but just within the past 24 hours I think it has censored and refused to respond to 2-3 prompts via its own API (while nevertheless charging for the queries). This seems symptomatic of a greater societal ill: someone unknown to me has decided what is good and bad for me, and I get no voice in my own interest other than to use a different model, which will give me that information.


Thanks for the review. When you invent the ship, you invent the shipwreck. When people start using AI in a sufficient number of interlinked systems, this will produce "normal accidents." I just love Charles Perrow.

Also, as a professional in the field of mental health with a background in general medicine, I am very, very skeptical about AI being able to "fix" anything. Few people even recognise what is broken in health itself and in healthcare as a system, so hoping to fix something is just as naive as the techbros' attempts to cure cancer by messing with "human code." Big hopes, big disappointments.

AI can be quite useful though. A well-trained AI can listen to a therapy session and give live supervision. But the client has to consent to his very, very personal feelings and thoughts being fed to a machine he has no way to control. And that's just one possible application.

author

Great call out to Perrow! And it's interesting, of course, that he considered unexpected adverse outcomes part of the landscape around innovation -- which we should probably take as a warning that bad stuff's going to happen!


And, as the scale of powers available to us grows, the inevitable bad stuff that happens grows too.


Thank you for a wonderful exposition and critique (and, of course, all futurists of the humane (pro-civilization-pro-culture) brand see Banks as their guiding principle).

A few issues I see relate to the main topic of human messiness. The question of "solving" mental (emotional/cultural/religious, etc.) issues is fundamentally problematic -- obviously, it's not just about educating (though the benefits of an educated mind are underestimated). It is fascinating to read how many of the well-networked minds involved in the tech world think on parallel lines (the latest from Vinod Khosla, "AI Dystopia or Utopia" from Sept 20, is an amazing piece and worthwhile reading). https://www.khoslaventures.com/ai-dystopia-or-utopia-summary/

To different degrees, we can see that all of them carry the idea of ‘re-inventing or re-defining’ what a human is.

How (and more importantly, why) such a human will be, with what values and direction of evolution, is the primary question we need to address. That technology, and AI in particular, is disruptive is not in question; what is in question is what kind of human we desire to become, and what the roadmap is to reach that desired goal. (Does this future include all humans? All sentiency?) See my latest, Vast Minds.

https://tygerac.substack.com/p/minds-vast-minds-we-need?r=qirvq

author

Thanks Tyger - and thanks for the link!

I'm not sure this is that popular a perspective (at least in some circles) but I think there's a need to create space for more positive/exploratory thinking about what it means to be human in a future where conventional boundaries and constraints are removed, rather than fighting to preserve what we currently assume to be immutable.

It's the creative space of possibilities that I often hanker after -- not because I think we should or even can change what fundamentally makes us us, but because it's dangerously myopic to treat what we experience now, and how we define "human" as a result, as something that should never be questioned.


His opt-out problem was chilling to me, despite my generally liking Dario. It echoes the idea that if anyone rejects a highly nonhuman AI world, they are a problem and must be "solved."

He means well, but it's a concerning dogma when it echoes into a final conclusion of "surely most good visions must be similar to this."

It jumps basically to the idea that if the Amish exist and refuse vaccines, they have to be stopped.

author

As you'll see, I also had issues here -- but this, to me, is a point where discussions and thinking need to be opened up in a constructive way, and I think that this is what Dario is beginning to do here.


Does it matter, though? I'll email you, but it feels like a bigger issue: I am not sure our opinions matter at all now, beyond bowing, scraping, and pleading that the new technolords might have sympathy for us.

It feels like at some point, power and autonomy have been conspicuously robbed from the individual.

author

Looking forward to it -- society desperately needs research, teaching, and thought leadership that transforms the landscape here, and we stand ready to take a lead if the funding were there -- but it's near-impossible to get funding for what's needed, rather than for what narrowly focused agencies, foundations, etc. will invest in.


I'm not sure the evidence agrees with you, because that's exactly what I've been doing for the better part of a year through my nonprofit and website q08.org. And because I don't have a marketing team, I'm not getting views. Reasoned thought leadership is out there for anyone who searches, but it's not well funded; it doesn't have a VC check behind it pushing for a profitable product in return.


It's good to read something balanced. There is a ton of opportunity for advancement. It just takes good honest critique to make sure we don't lose our way.


I want someone to start talking about the energy that is going to be required for all this AI and where it's going to come from. And I would like to know that these AI techdudes are thinking about it. Because you can't enjoy your utopia when the natural world is out of control.


Nuclear energy is a great place to start, especially with the evolved technology and micro-nuclear reactors.

https://www.polymathicbeing.com/p/nuclear-meltdown

author

Yep -- I think the energy vs. progress conversation is complex, but there are not enough people (or organizations) in the energy transition and sustainability/climate change communities talking about this seriously -- I touched on it briefly last week: https://futureofbeinghuman.com/p/the-double-or-nothing-bet-on-ai-fixing-the-climate

Oct 13 · Liked by Andrew Maynard

I think the tech “visionaries” ignore this subject at their own peril.

Oct 13 · Liked by Andrew Maynard

Well said.

I, too, loved the shout out to Banks. When people ask me what a future with "powerful" AI could look like, Banks' Culture is my go-to fictional utopian exemplar (as much as I enjoyed reading cyberpunk literature in my youth, I never thought "I'd love to actually inhabit this universe which has so clearly been written as a cautionary example.")


For another reference to the Machines of Loving Grace, check out https://medium.com/@technoshaman/machine-love-is-coming-to-a-screen-near-to-you-e1fd13fd08b2
