14 Comments

Fantastic read, thank you for sharing; it gave me a few different lines of thought.

Creator / Amanuensis:

The positioning of creator and amanuensis is interesting. I can see that as a possibility, but I also see hints of a more collaborative future ahead, where it's not just ‘conceptualizing and writing’ but rather an interplay between human and AI to bring new works to life, potentially working with each other in real time (truly collaborating) to create the outcomes.

Most of the digital creators I know already use a simpler form of this: they use the AI to create an initial draft of a concept, and then the human takes control to refine it or reshape it into the final outcome. But that seems like only a baby step towards a more iterative and collaborative future with AI.

Human or AI Tools:

Simplistically, I like to use Cattell’s theory of fluid and crystallized intelligence as a starting point for drawing a distinction between human and AI uniqueness and capabilities. If this is used as a baseline, it seems plausible that there are some tasks that are not achievable with our current AI and are therefore more likely to remain human tasks.

However, fluid versus crystallized may be too coarse a classification for a lot of scientific work. As an example, both photons and gravity were known constructs for a long time before their interactions were hypothesized and discovered (by a human), and due to the ‘distance’ between their domains (macro vs. micro), it's improbable that those interactions would have been discovered by AI. Perhaps this is an area that needs to be explored more deeply in order to really understand the impact on the future of work, and where humans and machines (may, should, could, will not) perform roles interactively or separately.

I deeply appreciate the thought exercise you shared; great insights and observations to ignite deeper thinking.


Clear and thoughtful! The primary-over-AI-over-human dynamic, to be honest, feels like my internal state when asking AI for anything that requires verification. For example, after asking ChatGPT to propose a few ways to solve a coding task, it's now me reviewing and potentially testing each of the AI's proposals. Not as insidious as your example organization, of course, but a situation where I've decided, however momentarily, to take instruction from a text extruder.


Maynard writes, "...we should probably be thinking about whether we want such a future and what we should be doing if we don’t."

Here it seems useful to reference earlier automation transitions, like the mechanization of agriculture and the use of robots in factories. Did the people who lived through those transitions really have a choice? I don't mean just the worker class, but the entire system. What company or country had the option to decline the mechanization of agriculture?

The future of AI will be determined by people all over the world, most of whom we in the West have little to no control over. Is all the talk about what AI should or shouldn't become mostly a fantasy of control that we don't actually have?


I've pinged you: I've been working with a university on AI safety and just ran a workshop with them last week; I sent the photos. I think we should connect with them and see how we can negotiate this in a way that is ethical.


I think we ought to also factor in the innate human resistance to authority, which might make humans less-than-ideal amanuenses, especially when the ideas are proposed by a non-human authority.


Your friend Mary sounds like an optimist! Human-as-amanuensis could be read as the bargaining phase of grieving the end of an anthropocentric intelligence paradigm.

Yes, many current applications of AI are still very "manual" and require a human in the loop. But now that the era of agents is upon us, we're seeing more of this:

https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/

"In general, we find that AI agents perform better than humans at AI research engineering tasks when both are given 2 hours, but they perform worse at higher time budgets."

There's no shortage of loud voices still working through the denial and anger stages of grief. "Nonhuman entities will never be able to do [X]!" Perhaps some subset of them are correct. But the observed pattern so far is that progress continues to chip away at hubris.

Should AI research engineers hang their hopes and dreams on the improbable outcome that we never progress beyond the "2 hour" complexity barrier? Or might it be wiser to consider having a dialogue about what happens when/if we scale to, say, the "1 year" barrier?


I think you're channeling what was implicit in the article, Mark 🙂 To me, the concept of human labor in service of AIs is pretty disturbing, and one that is becoming easier to envision, as this thought experiment shows.


Feels like almost everything about AI is disturbing, to be honest. The Future of Being Human is whether we survive at all.


Oh, indeed! I'm just pulling on the thread you so beautifully wove into the fabric of your narrative. And I'm pulling on it because the point you make is one that keeps me awake at night.


You may be interested in reading a study by the Italian central bank, Banca d'Italia: “An assessment of occupational exposure to artificial intelligence in Italy”.

A first finding is:

Overall, the different methodologies agree on the fact that occupations requiring cognitive skills are more likely to be exposed to the introduction of AI, a marked difference compared to the introduction of robots, one of the most recent waves of innovation observed.

And moreover:

Observably, AI technologies have brought advancements in all those activities (perception, handling with dexterity, content generation, social interactions) that were previously considered inherently human and at low exposure. In fact, some of the occupations with low exposure to earlier automation technologies now appear in the medium or high exposure groups, albeit with different degrees of complementarity (e.g. mathematicians or human resource managers). Other occupations, mainly related to physical strength, were highly exposed to some of the previous technological waves, but appear only mildly exposed to AI.

You can find the full paper in English at this link:

https://www.bancaditalia.it/pubblicazioni/qef/2024-0878/index.html?com.dotmarketing.htmlpage.language=1


Thanks Franco!


Now I'm going to be thinking about this all day! Wondering how it would play out in an organization over time. For example, would the primary agent, who is presumably responsible for several categories of work, have a separate AI for each? Or one AI that directs the work of several humans (who, as you show, also use AIs to assist them)? Do those AIs interact with each other? What's the equivalent of a 1:1 meeting when the primary agent needs to touch base with the AI? What percent of time and energy do the human-to-human communications still require?

Thank you especially for the drawings. Very helpful for understanding the progression of models.


Thanks Margie! I think there are probably many configurations that are possible, although as Mark Daley notes in the comments, a good many of these are probably not desirable!


Really enjoyed reading this thought experiment, which especially resonated with us with regard to the creative skills that are uniquely human and relevant for innovation and progress at a societal level. We would love to explore with you what it means to be human in a human-AI symbiotic relationship...
