The Artisanal Intellectual in the Age of AI
How is advanced artificial intelligence forcing a rethink of the value of human intellectual labor?
I must confess that, since getting my hands on OpenAI’s Deep Research (not to be confused with the just-released Deep Research feature on Perplexity), I’ve been intrigued by how it’s forcing me to rethink what it means to be someone who makes a living by thinking.
Last week that led to me exploring Deep Research’s ability to research and write a complete PhD dissertation — unaided, apart from some final formatting. While the resulting ~400-page document fell short of a true scholarly dissertation, it was nevertheless impressive.
But AI-only “intellectual labor” is — at least at the moment — less interesting to me than what a person and a powerful reasoning/research AI might be able to achieve together.
And so I set myself the task this week of seeing what’s possible when I combine my own “intellectual labor” with OpenAI’s Deep Research.
The result — and my notes on the process — can be found below, along with an intriguing extra if you get to the end of the notes.
Of course, people have been augmenting their own abilities with generative AI for ages now, so you might be thinking there’s nothing new here. But I would beg to disagree.
For the first time in my experience as a pretty well established and respected academic, it feels like AI is capable of extending what researchers, academics and scholars can achieve beyond anything we’ve seen before. And this is what I was interested in exploring.
Building on last week’s article, I decided — somewhat ironically — to focus on a concept I touched on in its footnotes: that of the “artisanal intellectual.”
As I wrote back then,
I think we do have to grapple with the very real possibility that AI is becoming a powerful catalyst and accelerant in research that will relegate human-only research to a class of artisanal intellectualism where the primary purpose is the provenance and process, not the product.
The idea of the artisanal intellectual was a bit of a throwaway at the time. But since then I’ve been finding myself increasingly intrigued by it — including mentioning it in a number of panel discussions and in the latest episode of the Modem Futura podcast.
What made it particularly relevant to the current exercise is that it’s not a term that’s been widely used or written about in the past. And so Deep Research was going to have its work cut out for it as it researched the concept.
My process — and this is covered in the Notes below — was to get Deep Research to take the first pass at researching and writing an article, and then for me to go in and make line by line edits, checking, adjusting, and adding in citations along the way.
The result is substantially better than anything I could have pulled together on my own in the time I had. And the final article is definitely better than Deep Research’s initial draft — and what the AI could have produced unaided.
I’d argue that the collaboration is also genuinely generative as it explores and expands understanding around what it might mean to be an artisanal intellectual in an age of AI.
This is, admittedly, a rather quick and dirty demonstration of what can be achieved through collaborating with the cutting edge of AI. But it’s also one that would not have been possible a mere couple of weeks ago.
It’s also a process that has significantly advanced my thinking around how an academic working with the cutting edge of AI can far surpass what either could achieve on their own.
Let me know what you think of the experiment and the ideas, and do check out the Notes if you’re interested in the process and where it led.
The Artisanal Intellectual in the Age of AI
There’s a growing sense that the cutting edge of AI is beginning to upend ideas around what it means to be a scholar or an intellectual – or an academic for that matter – that have persisted for millennia. Our ability as humans to think, to reason, to develop and explore abstract ideas, and to conceive of ways of understanding the world beyond what we can experience directly, has been core to defining who and what we are almost since the dawn of Homo sapiens.
And yet the emergence of AI models that simulate – and may soon exceed – human-level thinking and reasoning is beginning to challenge the value of human-only intellectual endeavors. Reasoning models like OpenAI’s o1 and DeepSeek’s R1 have only been out a matter of months, yet they are already being used to augment research and scholarship in potentially transformative ways. And with the release of OpenAI’s latest reasoning-powered research tool, Deep Research, it feels like we’re on an accelerating path toward human intellectual exceptionalism becoming a somewhat outmoded idea.
This possibility got me thinking recently about the idea of the “artisanal intellectual” – the possibility that human-only intellectual pursuits may soon be relegated to a realm of thinking that is more valued for its human provenance than its practical use. The same concept is easily applied to the idea of the artisanal scholar or artisanal academic – although I suspect that in many people’s minds most academics and scholars are already firmly in the “artisanal” camp!
The idea that what was once considered the pinnacle of human achievement might become an artisanal activity in an age of AI is an intriguing one. But it also deserves framing within a much broader historical context because, useful as it might be as AI capabilities continue to advance, it’s not entirely new.
The Origins of Intellectual Craftsmanship
The idea of the intellectual as a craftsman has deep roots, even if the term “artisanal intellectual” itself is not widely used. In classical philosophy, thinkers distinguished between different kinds of knowledge – episteme (theoretical understanding) and techne (craft or art) – recognizing that certain wisdom comes from skilled practice. Centuries later, scholars like C. Wright Mills explicitly described scholarship as a craft. In his 1959 essay “On Intellectual Craftsmanship,” Mills argued that true scholarship is “a choice of how to live as well as a choice of career”, an ethos of disciplined habits and personal commitment (Mills 2000). He portrayed the scholar as an “intellectual workman” who “forms his own self as he works towards the perfection of his craft.” This view positioned research and writing not just as tasks or outputs, but as a way of life akin to the artisan honing a skill.
This is very much reflected in Dominic Boyer’s thinking in his essay “The Medium of Foucault in Anthropology” where he defines an artisanal intellectual – possibly the first time the phrase is formally used – as a “knowledge-maker who has a relatively immediate and sensuous relationship to the epistemic forms s/he is producing instead of relations that are strongly capitalized, in other words, austere and market-mediated” (Boyer 2002).
Viewed more broadly as craft, many intellectual traditions reflect artisanal qualities. Medieval monastic scribes, for example, treated manuscript copying and illumination as a meticulous craft, embedding personal artistry into the scholarly preservation of knowledge. Early modern scientists often built their own instruments and conducted hands-on experiments, blending manual skill with intellectual inquiry – a model of “knowledge by doing.” Michael Polanyi, a 20th-century philosopher of science, highlighted that much of what experts know is tacit and learned through apprenticeship, not through formulas (Polanyi 1966). He famously observed that “we can know more than we can tell,” emphasizing that skilled understanding (like riding a bicycle or diagnosing a patient) often can’t be fully captured in explicit rules. Polanyi’s insight underscores the artisanal aspect of knowledge: master practitioners develop intuition and know-how that defy complete codification. In short, long before AI, thinkers recognized that intellectual work involves personal craftsmanship – the slow accumulation of judgment, creativity, and tacit skill.
The notion of the artisanal intellectual also resonates with broader philosophical critiques of technology. In 1954, Martin Heidegger warned that modern technology could enframe humanity, making us view the world as a mere resource or “standing-reserve” (Heidegger 1954). He contrasted this with more authentic ways of “bringing-forth” truth, akin to a craftsman’s revealing of meaning through work. Today, scholars draw on Heidegger’s insight to caution against an overly mechanized approach to knowledge. If research becomes fully automated, we risk losing the “knowledge-seeking journey itself,” the reflective process that gives scholarship its human depth (Leahy and Maynard 2025). This concern echoes through the decades: from the 19th-century Arts and Crafts movement (which valued handcraft against industrial mass production) to modern authors like Richard Sennett, who argues that “the spirit of craftsmanship” – the “desire to do a job well for its own sake” – is an enduring human impulse (Sennett 2008). Such perspectives provide a historical and philosophical backdrop for thinking about scholars in an AI age: they remind us that intellectual labor has long been cherished as a personal art, not just an output.
Contemporary Trends
In the current era of advanced artificial intelligence, the concept of an “artisanal intellectual” is gaining new salience. The phrase itself is beginning to circulate in discussions about AI and academia. For instance, a recent conversation on the Future of Being Human Substack explicitly defined “the artisanal intellectual as someone who thinks without using AI” (Maynard 2025). In other words, some modern scholars are framing the choice not to use AI tools in research and writing as a deliberate, value-driven stance – akin to a craftsperson choosing hand tools over power tools for greater control and authenticity. This stance is likely to become increasingly relevant as AI systems grow capable of generating essays, writing and debugging code, simulating reasoning, and conducting extensive and iterative text-based research. It is why some – like Maynard – are asking whether future scholars might bifurcate into those who rely on “intelligent” machines and those who pride themselves on a more handcrafted intellectual approach (Maynard 2025).
Some academics are actively beginning to reflect on what distinguishes human intellectual labor from machine-generated work. A key theme that’s emerging is authenticity and process. Critics of heavy AI reliance argue that something essential is lost when people outsource thinking and writing to algorithms. They worry about the erosion of what one writer called the “hard thinking and writing work” behind truly well-crafted papers (Lindebaum 2025). Scholars like Dirk Lindebaum caution that using tools like ChatGPT to speed up publications can become a temptation to “skip the hard thinking…in pursuit of a longer list of papers,” ultimately impoverishing analytical skills. In surveys, academics voice fears that over-reliance on generative AI will deskill researchers and “lead to disinvestment of and alienation from authentic and idealised versions of academic personhood” (Watermeyer, Lanclos et al. 2024). In other words, if one lets an AI assemble literature, formulate arguments, or polish prose, the researcher might gradually lose the craft of those activities – much as a craftsman loses skill when a machine takes over the handiwork.
In response, some scholars are beginning to advocate for “slow scholarship” or a human-centered approach – the scholarly equivalent of the “slow food” movement – pushing back against the pressure to use AI for constant efficiency. Rather than celebrating AI’s ability to churn out more content, these voices emphasize quality, deliberation, and the uniquely human elements of research. For example, a recent analysis noted that while generative AI can save time on drudgery, few academics were using the “gift of time” to engage in deeper reflection or creativity (Watermeyer, Lanclos et al. 2024). Instead, most were simply increasing their output to meet performance metrics. This has led commentators like Watermeyer et al. to ask whether academia is “losing sight of the significance of tasks that are easily automated.” Some argue that the process of research – reading, note-taking, writing and rewriting – has its own scholarly value in developing understanding. If AI streamlines these processes too much, it may introduce what one paper calls “algorithmic conformity” (Liel and Zalmanson 2024), squeezing out serendipity and the unexpected insights that come from wrestling with information personally. In essence, a human-centric trend in scholarship calls for maintaining the craft of inquiry: encouraging academics (and students) to do some things “the hard way” to preserve imagination and critical thinking.
Yet not all contemporary discussion sets human and AI in opposition. Many scholars and educators are exploring ways to integrate AI as a tool rather than a replacement – echoing ideas from early computing pioneers about “augmenting human intellect” rather than supplanting it. The concept of the scholar as a kind of cyborg craftsman is emerging: using AI for routine tasks while consciously adding human judgment, ethics, and creativity on top. In practical terms, researchers might use AI to summarize literature or generate data but then apply their own critical analysis to interpret results. Or AI might serve as a brainstorming partner that offers on-demand research insights, while the human scholar curates the meaningful questions and ensures intellectual rigor. Reflecting this, educators are formulating guidelines for when “generative AI should be used and when it shouldn’t,” trying to carve out which academic tasks must remain human-driven for integrity’s sake (Watermeyer, Lanclos et al. 2024).
This integrative approach treats AI like a powerful new instrument in the scholar’s toolbox – akin to a microscope or a word processor – that can amplify human capability, although rapidly expanding capabilities are increasingly straining such analogies as AI begins to open up qualitatively new possibilities. Proponents argue that, used judiciously, AI can handle the mundane workload (grant formatting, basic coding, proofreading), potentially freeing human academics to focus on higher-order thinking or creative synthesis. The challenge, as widely noted, is defining the boundaries: how to harness AI’s benefits without diluting the intellectual craft. Ongoing dialogues in academic communities and publications reflect this balancing act, as the academy feels out a new relationship (and tension) between automated intelligence and artisanal intelligence (Jones 2025, Lee, Sarkar et al. 2025, Wiley 2025).
Future Speculation
Looking ahead, the acceleration of AI capabilities – especially in reasoning, research assistance, and text generation – promises to dramatically reshape intellectual pursuits. Advanced AI systems are increasingly able to produce writing that passes for human, solve complex analytical problems, or even generate new hypotheses from data. This raises provocative questions: What will “intellectual work” mean when a machine can draft a competent scholarly article or synthesize a field’s literature in minutes? We are likely to see a spectrum of adaptation. On one end, a cadre of AI-empowered scholars will fully embrace these tools, working in tandem with AI co-researchers to achieve feats of productivity and interdisciplinary insight previously impossible. On the other end, we may see the rise of the artisanal intellectual as a conscious identity – scholars who pointedly work without AI aid (or with minimal use) to preserve a traditional mode of scholarship. Just as handmade artisan goods gained cultural value in an industrialized world, human-crafted scholarship might acquire a certain prestige or trust in an AI-saturated world.
If AI does become deeply embedded in research and knowledge creation, the notions of expertise and intellectual labor will inevitably shift. Traditionally, expertise meant having a vast store of knowledge and the ability to analyze and synthesize that knowledge in novel ways. In a future with AI, the store of knowledge will be instantly available to anyone via AI, and even basic analysis might be automated.
As a result, human expertise may be redefined to emphasize qualities that AI cannot emulate easily. This could mean greater emphasis on creative thinking, ethical judgment, contextual understanding, and interdisciplinary integration. An expert might be valued less for recalling facts or performing routine analysis (tasks AI excels at) and more for posing the right questions, interpreting nuanced human contexts, and guiding AI systems to fruitful outcomes.
Some are already foreseeing scholars becoming more like “knowledge curators” or orchestrators – selecting, validating, and weaving together insights from AI – rather than lone authors of every word. The role of intuition and tacit knowledge could become a hallmark of human experts, as these are areas where humans might still have an edge in originality and meaning-making.
In this AI-enabled future, the day-to-day work of scholars might tilt away from certain tasks. For instance, data analysis, coding, transcription, basic writing drafts, and translation are likely to be largely automated by near-future AI. Intellectual labor may become more about oversight, strategy, and big-picture synthesis. We might see researchers spending more time in high-level design of experiments or interpretation of results, while AI agents handle the grunt work. This could elevate the importance of collaboration and communication skills – explaining and contextualizing AI-generated findings to other humans – as well as the ability to verify and correct AI outputs.
Conversely, there’s a dystopian possibility that scholars could be “proletarianized” – reduced to mere supervisors of machine output, with diminished creative agency (Watermeyer, Lanclos et al. 2024, Watermeyer, Phipps et al. 2024). If universities and industries push for maximum efficiency, intellectual work could become more assembly-line in nature, with humans simply managing workflows and quality-checking AI’s products. This raises concerns about a loss of fulfillment and mastery: will academics feel like craftsmen or like cogs in a machine? The answer may depend on how intentionally we preserve the craft elements of scholarship.
There is a distinct possibility here that the authority of academics and scholarly institutions might be challenged in an AI-dominated knowledge ecosystem. But there are also arguments for ensuring that academic authority continues to be recognized. When AI can generate text that sounds authoritative, how do we ensure information quality and credibility? In the future, the personal credibility and unique voice of human scholars may become even more important – a form of intellectual branding that assures readers or the public that a human expert (with values and accountability) stands behind a piece of work. We might see a stronger emphasis on transparency about AI use in publications (e.g. statements of human contribution), and perhaps a premium placed on work that is demonstrably human-crafted, as a mark of rigor or ethical integrity.
There is a possibility that, if most people use AI, those few who opt out – the artisanal intellectuals – might garner a particular prestige for carrying the torch of undiluted human scholarship. This is hinted at by Watermeyer et al. as they consider the potentially transformative impacts of generative AI on academia (Watermeyer, Phipps et al. 2024). However, this could create a new inequality: only well-resourced or tenured scholars might be able to afford the slower, manual approach, while others feel pressured to use AI to stay competitive. Academic authority might thus bifurcate, with a small elite claiming the mantle of authenticity and the majority working in AI-assisted paradigms.
Optimistic vs. Cautious Visions
The future of intellectual life with advanced AI is still unwritten, and speculation ranges from optimistic to cautionary. Optimists might envision a golden age of research where humans and AI synergize. AI might rapidly crunch numbers, test permutations, generate hypotheses, draft reports, and even conduct original research, while human thinkers set directions and make conceptual leaps. In this vision, scholars could tackle grand challenges (climate, disease, fundamental physics) more effectively, leveraging AI as a tireless collaborator. Freed from some drudgery, an academic might have more time to focus on core ideas and innovative thinking, essentially doubling down on the creative craft of their discipline with AI as support. Education and learning could also be revolutionized, with AI tutors handling rote learning and freeing students and teachers for deeper mentorship and critical engagement.
Cautionary voices, however, urge that we consider what is lost in translation. If every step of reasoning and writing is accelerated, do we rob scholars of the incubation time that often sparks original insight? There is a real risk of “dehumanization” and loss of intellectual autonomy if academics become overly dependent on machine outputs (Bender 2024). Some might fear a future where research becomes a homogenized stream of AI-generated material, with human academics struggling to imprint their individuality. The extreme endpoint would be an intellectual landscape of abundant information but potentially shallow understanding – a world where much is written but less is deeply comprehended. In such a scenario, the very capacity for and claims to expertise and the notion of academia as a critical craft could diminish significantly.
Given these possibilities, there is a strong argument for taking a nuanced approach to the future. This will mean actively shaping practices and policies now: deciding which intellectual skills are essential to cultivate in humans, even if AI can perform them, and figuring out how to use AI to amplify rather than erode the richness of scholarly work. It also means preparing new ethical guidelines and educational methods so that the next generation of thinkers can navigate a world with AI without losing the artisan’s touch – curiosity, critical skepticism, imaginative leap-taking – that has always driven knowledge forward.
Conclusion
The concept of the “artisanal intellectual” in an AI-driven era encapsulates a vital question: What value do we place on the human craft of thinking, researching, and creating knowledge, when machines can do so much of it for us? History and philosophy remind us that intellectual pursuits have always had a craft element – a blend of skill, personal dedication, and moral purpose – that doesn’t readily translate into raw efficiency. Contemporary academics are wrestling with this balance in real time, some embracing AI’s power, others defending the sanctity of human-centric scholarship. And as we peer into the future, we can imagine profound transformations in how expertise and authority are defined.
Rather than arriving at a simple verdict of “pro-AI” or “anti-AI,” the discourse suggests we are entering an age of choice and redefinition. The artisanal intellectual may emerge as one emblem of resistance or differentiation – a commitment to the craft of intellect in a time of intelligent machines. Their role could be to ensure that the university of the future retains places for slow thinking, mentorship, and the je ne sais quoi of human insight. Meanwhile, those who integrate AI will aim to carry forward the torch of human reason with augmented capabilities, ideally without extinguishing its spark. In all cases, the challenge will be to harness new tools without losing the wisdom, creativity, and ethical reflection that define the best of scholarship. The fullest context of this topic, then, is not a battle between humans and AI, but a conversation about how to preserve and reinvent the art of intellectual work – ensuring that as our tools evolve, our minds and values evolve with them, intentionally and artfully.
References
Bender, E. M. (2024). "Resisting Dehumanization in the Age of “AI”." Current Directions in Psychological Science 33(2). https://doi.org/10.1177/096372142312172
Boyer, D. (2002). "The Medium of Foucault in Anthropology." Minnesota Review 58-60: 265-272.
Heidegger, M. (1954). The Question Concerning Technology. Translated by William Lovitt (1977).
Jones, N. (2025). OpenAI’s ‘deep research’ tool: is it useful for scientists? Nature. https://doi.org/10.1038/d41586-025-00377-9
Leahy, S. and A. Maynard (2025). "Artisanal Intellectual: a response to OpenAI's Deep Research." Modem Futura (podcast). [link]
Lee, H.-P., A. Sarkar, L. Tankelevitch, I. Drosos, S. Rintel, R. Banks and N. Wilson (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers, Microsoft Research. https://advait.org/files/lee_2025_ai_critical_thinking_survey.pdf
Liel, Y. and L. Zalmanson (2024). "Between formal authority and authority of competence – the mechanisms of algorithmic conformity." Academy of Management Annual Meeting Proceedings 2024(1). https://doi.org/10.5465/AMPROC.2024.166bp
Lindebaum, D. (2025) "Researchers embracing ChatGPT are like turkeys voting for Christmas." Times Higher Education. https://www.timeshighereducation.com/blog/researchers-embracing-chatgpt-are-turkeys-voting-christmas
Maynard, A. (2025). "AI humility, artisanal intellectuals, and Reid Hoffman's Superagency." The Future of Being Human. https://futureofbeinghuman.com/p/ai-humility-artisanal-intellectuals
Mills, C. W. (2000). "Appendix: On Intellectual Craftsmanship." In The Sociological Imagination, 40th Anniversary Edition (afterword by T. Gitlin). Oxford University Press.
Polanyi, M. (1966). The Tacit Dimension. Garden City, NY: Doubleday.
Sennett, R. (2008). The Craftsman, Yale University Press.
Watermeyer, R., D. Lanclos and L. Phipps (2024). "If generative AI is saving academics time, what are they doing with it?" LSE Impact Blog https://blogs.lse.ac.uk/impactofsocialsciences/2024/01/22/if-generative-ai-is-saving-academics-time-what-are-they-doing-with-it/.
Watermeyer, R., D. Lanclos, L. Phipps, H. Shapiro, D. Guizzo and C. Knight (2024). "Academics’ Weak(ening) Resistance to Generative AI: The Cause and Cost of Prestige?" Postdigital Science and Education. https://doi.org/10.1007/s42438-024-00524-x
Watermeyer, R., L. Phipps, D. Lanclos and C. Knight (2024). "Generative AI and the Automating of Academia." Postdigital Science and Education 6: 446–466. https://doi.org/10.1007/s42438-023-00440-6
Wiley (2025). ExplanAItions: An AI study by Wiley. https://www.wiley.com/en-us/ai-study
Notes
My process here started out with asking Deep Research for a definition of “artisanal intellectual.” This threw up some initial ideas and links and led to the following prompt:
I would like you to research and write about the concept of the "artisanal intellectual" in the context of emerging AI capabilities. This should extend to the ideas of the "artisanal scholar" and "artisanal academic" as appropriate. I would like you to provide a historic framing and background for the concept -- both explicitly as "artisanal intellectual", and writing and thinking that contributes to the idea without necessarily mentioning it explicitly; explore emerging thinking around and use of the concept specifically within the context of how emerging AI capabilities are causing a rethink of the nature of human-only intellectual pursuits amongst academics and scholars; and speculate on how rapidly emerging and accelerating AI capabilities -- especially around reasoning and research models and what these might lead to -- might affect the meaning and use of the idea.
I would like the final product you produce to be a good draft of a ~800 word article to be posted on a Substack and aimed at leading thinkers in the field
The resulting draft can be read below — it’s identical to what Deep Research produced, apart from some formatting:
As an aside, I was pretty impressed that Deep Research was able to discover, draw on, and cite two sources that were only a day old — last week’s episode of Modem Futura where we talk about the idea of the Artisanal Intellectual, and my Substack about the podcast which also came out a day before I gave Deep Research the prompt above.
And just in case anyone thinks these were my contributions to the article, Deep Research found and used them with no prompting!
Once I had this draft text from Deep Research I went through it line by line and link by link. Every reference was tracked down to the primary source where possible, and the text updated to accurately reflect the source. New sources were also added where appropriate along with further context, and the article’s style smoothed out in a number of places.
The result was a paper that, while still being somewhat quick and dirty, is far better than the original Deep Research piece, and far more thoroughly thought-out and researched than I could have achieved on my own:
Just in case anyone’s interested, you can see how I added to and changed the original draft here:
The process was illuminating. It forced me to engage with literature I wasn’t familiar with, and learn along the way. It also demonstrated the value of the craft of the “artisanal intellectual” (i.e. me in this context) in assessing and building on the raw material produced by Deep Research.
The big takeaway though is just how synergistic and generative this collaborative process was. The time spent on it was, admittedly, rather asymmetric, with me spending several hours to Deep Research’s few minutes. But this is a foundational paper I will be using as I continue to explore the idea of the artisanal intellectual in an age of AI — and one that I think makes a useful contribution to broader thinking here.
But the story doesn’t quite finish there.
My final step was to feed my final article back to Deep Research and ask it what I had missed.
The result was what I would consider to be a next step in this research — a thought-provoking piece from Deep Research on “Missing Perspectives on the Artisanal Intellectual in an AI-Driven Era.”
I’ve attached a formatted version of this below. It comes with a rather large caveat emptor as I haven’t checked the references or the accuracy of the content.
But even given this, the ideas explored and expounded here are good — really good.
So much so that, reading this (and remembering it was produced in just a few minutes), it’s hard to imagine a future of effective intellectual labor that is not AI-enhanced.
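For anyone curious about what this workflow looks like when laid out step by step, below is a minimal, hypothetical sketch of the loop in Python using the OpenAI client library. To be clear, this is not how I actually worked: everything described above was done through the ChatGPT interface, and Deep Research itself wasn't something I accessed through code. The model name, the helper functions, and the idea that a general reasoning model can stand in for Deep Research are all illustrative assumptions.

```python
# A rough, hypothetical sketch of the draft -> human edit -> gap-analysis loop
# described in these notes. The model name is an assumption; the real process
# was run entirely through the ChatGPT interface, not the API.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RESEARCH_PROMPT = (
    "Research and write a good draft of a ~800 word article on the concept "
    "of the 'artisanal intellectual' in the context of emerging AI "
    "capabilities, covering historical framing, contemporary use, and "
    "future speculation."
)


def request_draft(prompt: str, model: str = "o1") -> str:
    """First pass: ask a reasoning model for a researched draft."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def request_gap_analysis(article: str, model: str = "o1") -> str:
    """Final pass: feed the edited article back and ask what's missing."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": (
                    "Here is an article on the artisanal intellectual in the "
                    "age of AI. What important perspectives, sources, or "
                    "arguments are missing?\n\n" + article
                ),
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = request_draft(RESEARCH_PROMPT)

    # The human pass happens here, outside the code: line-by-line editing,
    # tracking every citation back to a primary source, and adding context.
    edited_article = draft  # placeholder for the manually edited version

    print(request_gap_analysis(edited_article))
```

The point of the sketch is simply to make the shape of the collaboration explicit: one automated first pass, one slow human pass, and one automated gap-analysis pass at the end.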