The incomparable messiness of the provenance of ideas
Connecting influences to ideas isn't always easy — and this has just got harder with the growth of generative AI
I’m appallingly bad at recalling how my thinking has morphed and evolved over time, and who or what has influenced this process. As a result, I’m never quite sure how the people I’ve spoken with, the things I’ve been exposed to, or the serendipitous encounters I’ve had have contributed to my ideas.
I suspect that most people feel the same. But this struggle with the “provenance” of my thinking and ideas has been bugging me — especially as the debate around the provenance of the outputs from generative AI continues to grow.
It’s hard to avoid the growing debate around how we think about sources and attribution when using generative AI — Alex Reisner’s recent piece in The Atlantic on the complex use of copyrighted material in training platforms like ChatGPT captures some of the challenges and complexities here. And to be clear, there are important questions that need to be addressed when people’s work is used in generative AI without their permission.
Yet on some level, the assimilation and use of vast amounts of information feels very close to what we all do as we transform a lifetime’s exposure to myriad conversations, books, movies, images, experiences, and more, into something of our own. And this got me thinking about the threads that influence my own work, and how these are made visible and acknowledged — or not, as is sometimes the case.
The messiness behind my own “provenance of ideas” was brought home to me quite unexpectedly recently, as I was revisiting a chapter in my 2018 book Films from the Future for the Moviegoer’s Guide to the Future podcast.
Back in 2021, I was part of a small group of advisors working with the Canadian Institute for Advanced Research, or CIFAR, on a new call for proposals. Together, we landed on the framing of “The Future of Being Human” as one that would stimulate creative, cross-cutting, and visionary ideas.
As COVID continued to dominate life around the world, the essence of this call was captured well by CIFAR’s then-Vice President for Research Mark Daley as he wrote:
“The parts of the world that are emerging from the pandemic this year are mostly wealthy, industrialized nations, representing only a fraction of the world’s population. We know that most of the world is not emerging from the pandemic anytime soon. That fact alone is a powerful exemplar of the disparity in our global society. It’s a poignant and powerful reminder that asking about the future of being human necessarily involves questions about governance, about human rights, and the commitments we have to each other as fellow travelers on this planet.”
Did I do the equivalent of ChatGPT and regurgitate other people’s work as my own?
I’m pretty sure I didn’t — although I did check in with Mark about what we were doing as we were finalizing the Future of Being Human initiative, just in case I’d inadvertently co-opted something that didn’t belong to me. But I did have my niggling doubts. Had those conversations led to a course of action that wouldn’t have occurred otherwise? And if so, is my work around the future of being human really mine?
As it turns out, my thinking around the future of being human predated the CIFAR call for proposals by some years — it’s just that the threads connecting the many different influences, insights and ideas swirling around my head were neither linear, nor obvious to me.
This became clear to me as I was revisiting the chapter in Films from the Future that is inspired by the 1995 movie Ghost in the Shell. This is a film that grapples with the nature of personhood in a technologically complex future, and I used it as a platform for exploring in some depth the future of being human.
The thing is, I didn’t recall this as I was working with CIFAR a few years later. It’s embarrassing to admit, but I’d even forgotten that the chapter was subtitled “Being Human in an Augmented Future”! The underlying ideas I was exploring then clearly stuck in my mind and continued to evolve. They infused our conversations around the call for proposals. But the bright line of provenance between influence, ideas, and outcomes was lacking — despite it sitting there in black and white on my bookshelf.
To my further embarrassment, it gets worse. When I look back through my writing over the past several years, it’s clear that my thinking around the future of being human has been growing and evolving for some time. It was one of the reasons why I selected Ghost in the Shell for Films from the Future in the first place! Through this somewhat unexpected retrospective it became increasingly obvious that “artifacts” such as Films from the Future, my more recent book Future Rising, the ASU Future of Being Human initiative, this Substack, and even the CIFAR call for proposals, are merely the visible fruits of a messy and largely hidden network of influences, encounters, thoughts, ideas, and explorations that’s been growing for decades.
It’s this messiness that got me thinking about the concept of a “provenance of ideas” more deeply, and whether it’s ever possible to chart in any linear or documented fashion the origins of an idea and the factors that have influenced its evolution.
Here I should admit that I’m not that comfortable with the phrase “provenance of ideas”. It comes with a whole bunch of philosophical and ideological baggage that detracts from the simplicity of the concept of tracing the sources and evolution of an idea.
It also implies that attribution is somehow possible. This may be true where there is a clear-cut process of building ideas brick by brick on the thinking and insights of others — especially where these are codified in writings, images, or other forms of media. And, of course, this is part of the bedrock of academic scholarship, intellectual property, and ownership over ideas — and something that my academic writing reflects extensively.
But how about those situations where the true provenance of ideas draws on a tangled, impenetrable, and deeply non-linear set of chance meetings, random inspirations, serendipitous encounters, and the echoes of long-forgotten conversations?
This messiness far more accurately reflects the origins and evolution of my thinking around the future of being human. It’s something that’s grown almost organically as conscious and unconscious influences have sparked half-formed ideas which have, in turn, crystallized and grown into something more coherent over time.
What is clear is that the process of how I got to where I am in my thinking is something of a black box — even to me.
This will not come as a surprise to anyone who studies how we construct knowledge and understanding within the deeply complex and interconnected social and technological worlds we inhabit. But it still raises the question of how we attribute and respect the sources of our ideas, or whether this is a rather outmoded idea — especially as generative AI is increasingly a part of the mix of influences that inform and modulate our thinking.
What I needed was a more fluid way of framing the evolution of ideas from a complex web of influences, exposures and experiences. And so, in a very self-conscious twist of irony, I turned to none other than ChatGPT.
The ensuing conversation was helpful as a sounding board, but, as you might expect, it was somewhat limited intellectually. However, it did throw up a few intriguing alternatives to linear and ownership-based approaches to capturing the evolution of ideas.
These included concepts like “Ideation Tapestry,” “Mental Mélange,” “Cognitive Archipelago,” and even “Brain Brew!” And amongst these was the rather novel concept of an “Ideas Rhizome.”
ChatGPT defined this as drawing on the work of Deleuze and Guattari (especially their book A Thousand Plateaus) to represent “ideas as non-hierarchical, interconnecting networks, where any point can be connected to any other point.”
Deleuze and Guattari’s work around their philosophical concept of a “rhizome” as a non-linear and non-hierarchical interconnected network of knowledge, culture, and more, is intriguing — especially when applied to the evolution of ideas. But like the concept of the provenance of ideas, this also comes with plenty of baggage — not least in how it extends a biological metaphor in overly confident yet intellectually convoluted ways to the social construction of meaning. (Some readers may recall the critique of writers like Deleuze and Guattari in the 1990s for their postmodernist appropriation of phenomena from the natural sciences.)
Nevertheless, I like this idea of a complex and messy mass of threads and interconnections from which ideas arise. It seems to reflect the decidedly non-linear and often opaque ways in which my own brain works — and I suspect how most people’s do. I wonder, though, whether a better metaphor is fungal mycelia.
These are the complex and intertwined systems of thread-like “roots” that form the matrices from which “fungal fruits” like mushrooms arise. They organically spread through their environment: probing, connecting, interrogating, assimilating, and ultimately transforming what’s around them.
Of course, this is already pushing the metaphor farther than is probably wise, as any mycologist worth their salt will realize. But I do like the imagery of a multidimensional and largely impenetrable web of influences that leads to the occasional emergence of coherent ideas.
It’s also an image that allows the provenance of ideas to be murky, complex, and hard to trace, and one that recognizes the importance of collective and intertwined systems in ideas formulation and the creation of knowledge and understanding.
This doesn’t necessarily help in how to attribute influence and inspiration when there’s not a bright line between someone else’s ideas/work and your own. But it does help to make sense of the highly complex landscape here.
And this brings us back to generative AI. Just as I struggle to see and understand the provenance of my own ideas, the situation we find ourselves in with generative AI seems to mirror this — at least in part. A large language model like GPT-4 is exposed to myriad “artifacts” which, together, form a mycelium-like mat of associations, connections, and inferences. With a few exceptions, the responses from ChatGPT and other generative AI platforms are the fruits of this “ideas mycelium,” with very little direct traceability as to what has influenced or otherwise contributed to the output.
And as such, the provenance of ideas here lies not in the specific sources and threads, but in how they form a complex matrix of, for want of a better word, stuff.
Of course, who owns this stuff and who authorizes its use in the model are important — as is the breadth and diversity of the stuff itself. Just as what I am exposed to influences and biases my thinking, so the composition of the “stuff” that generative AI is built on leads to biases and influences in its outputs. And this means that great care needs to be taken to understand and, where necessary, oversee what goes into forming generative AI’s “ideas mycelium.”
Yet just as I struggle to map out the lineage of much of my own thinking, maybe we need to accept the increasing messiness of the provenance of ideas in an age of generative AI — whether in large language models or in our own heads — and recognize that what is important is the shape, structure, and composition of the “ideas mycelium” that leads to new understanding, rather than increasingly hard-to-identify bright lines between input and output.