<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Future of Being Human]]></title><description><![CDATA[Reflections on tech, society & the future from advanced technology transitions expert and self-confessed "undisciplinarian" Professor Andrew Maynard.]]></description><link>https://www.futureofbeinghuman.com</link><image><url>https://substackcdn.com/image/fetch/$s_!qH5B!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F993e75e3-61a6-4a1e-a13d-8675b8a71e28_730x730.png</url><title>The Future of Being Human</title><link>https://www.futureofbeinghuman.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 08 Apr 2026 08:37:42 GMT</lastBuildDate><atom:link href="https://www.futureofbeinghuman.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Andrew Maynard]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[andrewmaynard@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[andrewmaynard@substack.com]]></itunes:email><itunes:name><![CDATA[Andrew Maynard]]></itunes:name></itunes:owner><itunes:author><![CDATA[Andrew Maynard]]></itunes:author><googleplay:owner><![CDATA[andrewmaynard@substack.com]]></googleplay:owner><googleplay:email><![CDATA[andrewmaynard@substack.com]]></googleplay:email><googleplay:author><![CDATA[Andrew Maynard]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Spoiler Alert: I rebuilt my book for AI!]]></title><description><![CDATA[I've been experimenting with translating my 2018 book Films from the Future into a website designed primarily for AIs. 
Here's how it went.]]></description><link>https://www.futureofbeinghuman.com/p/spoiler-alert-wtf</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/spoiler-alert-wtf</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Thu, 02 Apr 2026 14:26:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ONHR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://spoileralert.wtf/" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ONHR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ONHR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg" width="1456" 
height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:229943,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:&quot;https://spoileralert.wtf/&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192870319?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ONHR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ONHR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F143b55fd-6e99-4b2b-b85b-343e30fbf2da_4000x2250.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As an author, I write for human readers. As I&#8217;ve noted before though, there&#8217;s a growing trend in AIs being the predominant consumers of the written word, often acting as a translator between source and consumer.</p><p>But if this is the case, why not embrace the trend and write directly for AI?</p><p>The idea intrigues me &#8212; and not only me. There&#8217;s a growing trend in creating AI-first content online. And so I thought I&#8217;d dive in and experiment with rebuilding my book <em>Films from the Future: The Technology and Morality of Sci-Fi Movies</em> as a website designed primarily for AI consumption. </p><p>The choice of book was very intentional. 
Even though <em>Films from the Future</em> was written in 2018, the underlying concepts, ideas, and observations are, if anything, far more relevant now than they were eight years ago. And as a result I&#8217;ve been thinking about ways to breathe new life into it.</p><p>And given the growing shift toward using AI apps to explore and synthesize information, repackaging it for AI consumption made a lot of sense.</p><p>Plus, it gave me the chance to add new material to the book&#8217;s original content while moving away from a title that only a publisher could love (I was never a fan of <em>Films from the Future</em>).</p><p>The result is the rather cheekily named website <a href="https://spoileralert.wtf/">spoileralert.wtf</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The website is built on a foundation of 127 markdown files<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> that include the book&#8217;s original content, together with additional material on cross-cutting themes, connections to emerging trends and issues, and personal reflections from me on everything from the book&#8217;s backstory to movies that did and did not make the cut.
But unless you are comfortable reading markdown files online, these are not intended for human consumption.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> </p><p>Rather, they are coordinated through a master AI-legible file &#8212;  llms.txt (building on a standard proposed by <a href="https://www.answer.ai/posts/2024-09-03-llmstxt.html">Jeremy Howard</a>) &#8212; that allows AI platforms to act as a personal guide to the website.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p>This is a markedly different approach to simply uploading the book into an AI (assuming you could get hold of the PDF in the first place), or building an AI bot or agent.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> </p><p>For one, it allows AI models to navigate and synthesize far more material than is presently possible with either of these approaches. It also means that anyone using the site can decide for themselves which AI platform to use, and how to use it.</p><p>There&#8217;s also an added advantage that, if you are using something like Claude or ChatGPT with memory turned on, the website plus AI become a highly personal guide to exploring emerging technologies and their responsible and beneficial development and use. </p><p>Reflecting the website&#8217;s AI-first design, the human-facing part of <a href="https://spoileralert.wtf">spoileralert.wtf</a> is minimalistic. 
Apart from a brief introduction and overview, the landing page includes a prompt to cut and paste into an AI of your choice, and that&#8217;s pretty much it:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://spoileralert.wtf/" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rV5D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 424w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 848w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1272w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rV5D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png" width="408" height="697.49377593361" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db40901c-d74c-4ad4-9891-36629f078e37_964x1648.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1648,&quot;width&quot;:964,&quot;resizeWidth&quot;:408,&quot;bytes&quot;:269366,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://spoileralert.wtf/&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192870319?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rV5D!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 424w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 848w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1272w, https://substackcdn.com/image/fetch/$s_!rV5D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb40901c-d74c-4ad4-9891-36629f078e37_964x1648.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#8230; although, being a writer, I couldn&#8217;t resist adding a little more stuff below the sign off!</p><p>At this point, this is still an experiment in AI-focused publishing. But as more and more people rely on AI apps rather than direct sources for information, I suspect that it&#8217;s a direction that&#8217;s likely to become increasingly important.</p><p>With that, please do try it out and let me know how you get on: <a href="https://spoileralert.wtf/">spoileralert.wtf</a>. 
</p><p>And if you&#8217;re interested in more information on the technical details and the experience of building the website, read on &#8230;</p><div><hr></div><h1>The Below the Fold stuff</h1><h3>The website architecture</h3><p>As I mentioned above, the website is built around a comprehensive llms.txt file that gives any AI that&#8217;s pointed toward it a clear map of what the site includes and where to look for specific content. If you&#8217;re interested, you can read the llms.txt file <a href="https://spoileralert.wtf/llms.txt">here</a> &#8212; it&#8217;s a markdown file and so easier to read by downloading and opening in a markdown editor.</p><p>This file describes the site&#8217;s architecture and links to 127 markdown files that provide guidance on interpreting and engaging with the book and website content, as well as allowing the AI access to the full text of the original book.</p><p>Within these files, six top-level domain guides cover:</p><ul><li><p>Emerging science and technology</p></li><li><p>Responsible and ethical innovation</p></li><li><p>Navigating the future</p></li><li><p>The twelve movies that the book draws on</p></li><li><p>Post-2018 developments, and </p></li><li><p>Complex emerging questions</p></li></ul><p>These provide sufficient context to allow the AI to navigate book content and associated material according to specific themes and areas.</p><p>Each domain file then links to a number of specific topic files &#8212; over ninety of them.
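</p><p>To give a sense of the format, here is a hypothetical, much-simplified sketch of the kind of structure the llms.txt proposal describes: an H1 title, a blockquote summary, and markdown link lists grouped under H2 headings. The file names and descriptions below are illustrative, not the actual ones used on the site:</p><pre><code># Spoiler Alert

> An AI-first rebuild of the 2018 book Films from the Future,
> with domain guides, cross-cutting themes, and the full original text.

## Domain guides

- [Emerging science and technology](https://spoileralert.wtf/domains/emerging-tech.md): illustrative description
- [Responsible and ethical innovation](https://spoileralert.wtf/domains/responsible-innovation.md): illustrative description

## Book chapters

- [Chapter 1](https://spoileralert.wtf/book/chapter-01.md): illustrative description
</code></pre><p>An AI pointed at a file like this can read the summary, pick out the links relevant to a given question, and then fetch only those files.</p><p>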
These were identified and fleshed out working with Claude Code, and create a thematic guide to the book that cuts across chapters and issues.</p><p>Finally, there are six supporting files that cover everything from discussion questions and my original movie shortlist, to an educator&#8217;s guide, and even some previously unpublished book trivia.</p><p>If you&#8217;re interested, the full site structure can be explored through the <a href="https://spoileralert.wtf/contents.html">Contents</a> web page.</p><h3>The process</h3><p>The complete website was built while working closely with Claude Code. While I had a very strong conceptual and editorial steer, Claude Code was pivotal in helping translate this into reality. It helped develop the site&#8217;s architecture, drafted content files, generated html code, and helped debug/refine what ended up being a deeply integrated and interconnected set of resources.</p><p>In all there are nearly 400 files associated with the site, as many of the markdown files have associated html files. And all need to be cross-linked and cross-referenced. Both the magnitude of a project like this, and the complexity of tracking hundreds of links, would have made this a near-impossible task for me to take on unaided.</p><p>Similarly, Claude Code could never have generated the website without my input and steer &#8212; the feel, functionality and purpose of the site, as well as the type of content, all come from me. And very intentionally, the site incorporates my voice, tone, insights, perspectives, and sensibilities, in ways that were only possible through working collaboratively with Claude Code.</p><p>Through all of this, Claude Code was a joy to work with &#8212; especially when adding new files that required deep integration over hundreds of documents!
The barrier to entry on a project like this is remarkably low, and the ability to talk through ideas, plans, and implementation as if talking to a colleague or co-worker was, for me, a game changer.</p><h3>The extra bits</h3><p>One of my hopes with this exercise was to add substantial value to the original book by making the content more relevant than ever to the present day. I also wanted the chance to add further content that users could not get anywhere else. As a result, if you assess content by file count, the original book constitutes less than 10% of the rebuilt version.</p><p>I won&#8217;t give too much away here as it&#8217;ll spoil the joy of discovery as you explore the site through your AI of choice. But there is information embedded in the site&#8217;s files on the backstory to the book that I haven&#8217;t shared before, details of films I considered for the book but that never made it, commentary from Claude on what I left out, and much more.</p><p>There is also a series of <a href="https://spoileralert.wtf/example-conversations.html">conversations</a> on the site between simulated users and Claude. I added these as I found I was far too close to the material to get a clear sense of whether the website was in any way useful. And so I asked Claude to generate a number of user profiles, and then tasked Claude Code with simulating conversations between these and an AI primed with the website&#8217;s prompt.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>They are a little &#8220;AI&#8221; in places, I must admit. But they are also a great way to get a handle on how this idea of an AI-legible &#8220;living book&#8221; works.
And they are a lot of fun to read!</p><p>Also, for the tech geeks, all the files are available to explore and dive into on <a href="https://github.com/2020science/spoileralert-wtf">GitHub</a>.</p><h3>What&#8217;s working well, and what&#8217;s not</h3><p>As I&#8217;ve noted above and in the footnotes, it&#8217;s tempting to consider this exercise as simply a glorified version of giving an AI a copy of the book and asking about it (like you might do in NotebookLM for instance), or building an AI agent/bot around it.</p><p>But spoileralert.wtf is very different from either of these. </p><p>And this makes it intriguing, ground-breaking,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> and sometimes just a little frustrating.</p><p>Unlike using the book through RAG, or developing a bot like a Gem (Gemini) or GPT (ChatGPT), the llms.txt-based approach allows an AI to navigate through a vast corpus of material, and to draw on connections that would otherwise be hard to make.</p><p>It also allowed me to architect the experience at a level of nuance and sophistication that would have been out of my control with a bot/agent, or by simply letting people upload the book to an AI platform themselves.</p><p>And this is the beauty of using an llms.txt file as a guide for AIs to navigate websites that are designed specifically for them. In this case, it enables an LLM-based AI to leverage a map/hub/spoke/web model tailored to how it consumes and utilizes content.</p><p>But there are issues with this approach. Not least is the challenge that, at present, most AI models do not recognize llms.txt files by default. And this is why, in the current configuration, the copy-and-paste prompt includes specific instructions to read the file. </p><p>Then there are the AI platforms themselves.
It turns out that some models are currently not advanced enough to engage fully with the material, or are simply not designed for this type of content.</p><p>For instance, there are still AI systems (including those that Microsoft uses) that rely on Bing&#8217;s index to access web content (yes, you read that correctly). And so anything not indexed by Bing is essentially invisible to them.</p><p>And, it turns out, Bing refuses to index anything on a .wtf domain. Who would have guessed!</p><p>Gemini has a similar issue &#8212; not with the domain, but with page indexing on Google. As a result, until a site is fully indexed by Google, parts of it will remain invisible to Gemini. And to complicate things, Google does not seem to like indexing markdown files.</p><p>To get around this, every markdown file on the website has a parallel html file. There&#8217;s also a parallel llms-html.txt index that provides the key to using them &#8212; along with instructions in llms.txt to use this as a backup if it hits issues retrieving markdown content.</p><p>This, I was pleased to see, works surprisingly well, with Gemini (and even Claude at times) switching to html content if the markdown files are being troublesome. </p><p>With this, here&#8217;s where things stand as of writing with different models:</p><ul><li><p>Claude Opus 4.6 (Extended thinking): works very well indeed.</p></li><li><p>Claude Sonnet 4.6: Also works well.</p></li><li><p>Gemini (Pro): Somewhat flakey, but possibly because it&#8217;s relying on files that have been indexed by Google (not all of them yet). And it&#8217;s not great with markdown files.
This will hopefully improve over time.</p></li><li><p>ChatGPT: Good when it&#8217;s working well, but unreliable!</p></li><li><p>Grok (Expert): Pretty good.</p></li><li><p>DeepSeek (DeepThink): Enjoying the pants-on-fire hallucinations here.</p></li><li><p>Perplexity: Not really functional at all &#8212; at least with the free version.</p></li></ul><p>The bottom line seems to be that most platforms will provide useful but superficial insights into the book via the website, but the simpler ones (and DeepSeek) are prone to missing stuff, not digging deep enough, veering off toward other sources, or simply making things up.</p><p>The more powerful the model, the larger the context window, and the more it utilizes reasoning/thinking modes, the better it is &#8212; with Claude far outstripping the rest.</p><h3>And a final word</h3><p>I have no idea whether anyone will find this exercise useful or interesting &#8212; and so would love feedback in the comments below.</p><p>I do know that there&#8217;s content in the 2018 book that is deeply relevant to this moment in time. And that this is buried in a book that very few people will read because a) it&#8217;s a book, b) it&#8217;s printed on paper (unless you have the Kindle version or audiobook of course), and c) it&#8217;s more than six minutes old (at least, it feels like this is the current attention-lifetime for new material).</p><p>And because of this, I feel quite strongly that new ways of making that content accessible and relevant should be explored.</p><p>The approach here of creating content intended for AI seems like a potentially interesting way forward, as it makes the book far more useful to someone using AI than it would otherwise be.</p><p>More personally though, this whole exercise has given me the opportunity to revisit the content and the thinking behind the book, as well as to flex my creative muscles while having some fun along the way.
</p><p>And there&#8217;s been something quite generative about working with Claude Code on the additional material &#8212; including stuff that I&#8217;ve never written about before.</p><p>But, of course, to find that, you&#8217;ll have to try the <a href="https://spoileralert.wtf/">spoileralert.wtf</a> prompt out for yourself and see where it takes you &#128513;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There are, not surprisingly, many layers to why I chose this particular URL. To find out more though, you&#8217;ll have to point your AI to it and ask it why!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Markdown files are becoming the de facto standard for content written for AI consumption.
And compared to regular web pages they offer a lot of advantages, including eliminating an awful lot of formatting and contextual content that is irrelevant to an AI, but eats up tokens anyway.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>A bit of a spoiler alert here: If you are desperate to read the AI-intended content and are put off by the markdown formatting on the screen, you can access web-formatted versions from here: <a href="https://spoileralert.wtf/browse.html">https://spoileralert.wtf/browse.html</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I&#8217;m not sure there are any AI platforms that actively use the llms.txt protocol at the moment &#8212; which is a shame as the idea is that when an LLM visits a website the first thing it does is read llms.txt to allow it to navigate and access the content as an AI and not a human. But there&#8217;s nothing like a good bit of future-proofing!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Of course, there will be readers who are adamant that everything here could be replicated by uploading files into ChatGPT or NotebookLM, or creating an AI bot or agent. But trust me on this, the llms.txt plus an integrated markdown file architecture is a fundamentally different approach to making content AI-navigable while not locking in to a specific platform. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Claude Code was instructed to create two agents with a firewall between them &#8212; one representing a user, and one representing Claude primed with the spoileralert prompt &#8212; and then simulate a back and forth between them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>There are people who are building AI-based extensions to books, and AI-legible versions of books in uploadable files (similar to our work with <em><a href="https://www.futureofbeinghuman.com/p/why-were-giving-away-our-book-on-thriving-with-ai">AI and the Art of Being Human</a></em>). But I&#8217;ve struggled to find anyone currently using an llms.txt-markdown architecture in the same way that it&#8217;s being used here.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can AI create a comprehensive degree program proposal in the time it takes to grab a coffee?]]></title><description><![CDATA[What started as an idle question got me thinking about how artificial intelligence stands to upend how we best serve students in higher education]]></description><link>https://www.futureofbeinghuman.com/p/can-ai-create-an-undergraduate-degree-plan</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/can-ai-create-an-undergraduate-degree-plan</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 29 Mar 2026 15:13:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-Pp3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-Pp3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-Pp3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6614409,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192455475?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-Pp3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!-Pp3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F532e2717-46e3-4244-a91b-fa9ce6c28611_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>If emerging AI systems can design, refine, and help deliver undergraduate degree programs that are far better designed and far more effective than those created by faculty committees alone, do we owe it to future students to ditch tradition in favor of emerging capabilities?</p><p>This, I must confess, is not the question I started out with as I began working on this article. But it&#8217;s one that I&#8217;m finding hard to let go of, having spent the past couple of days working with Claude Code on designing a comprehensive plan for a new degree program.</p><p>To be honest, I didn&#8217;t even set out to design a new degree. It was just an idle exercise in seeing what&#8217;s possible with the latest wave of agent-based AI platforms. 
But having asked Claude Code the question, I&#8217;m finding it increasingly difficult to ignore the result.</p><p>Before I bury the lede any further, you can open/download the complete 223-page degree program proposal that Claude Code produced below &#8212; complete with design philosophy, program architecture, learning outcomes, career pathways, value propositions to students, parents, and employers, and detailed syllabi for 18 core courses and a capstone: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://andrewmaynard.net/files/TSI-Program-Proposal-v5.pdf" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!liHZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 424w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 848w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1272w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!liHZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png" width="487" height="610.1335227272727" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1764,&quot;width&quot;:1408,&quot;resizeWidth&quot;:487,&quot;bytes&quot;:55522,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://andrewmaynard.net/files/TSI-Program-Proposal-v5.pdf&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/192455475?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!liHZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 424w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 848w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1272w, https://substackcdn.com/image/fetch/$s_!liHZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2783e196-ce03-4206-a57d-bea241c6dd85_1408x1764.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Click on the image to open/download the proposal</em></figcaption></figure></div><p>As you read it, take it from me, as someone who does this for a living: this is impressive.</p><p>The real story here, though, is what I learned working with Claude Code on this, and why it got me thinking more deeply than I expected about what we owe our students.</p><h2>Setting the scene</h2><p>I&#8217;ve been using the desktop version of Claude Code for a few weeks now, and I&#8217;ve been impressed by its ability to break projects down into sequential tasks, assign multiple agents to these tasks, and orchestrate them toward delivering a final product.</p><p>For anyone who&#8217;s used the latest iteration of Claude Code, OpenAI&#8217;s Codex, or similar systems, you&#8217;ll 
know already that these are nothing like your average browser-based AI or AI bot. Here, I must confess I&#8217;m a bit of a novice user compared to some of my grad students. But I&#8217;ve intentionally kept things that way, as I&#8217;m interested in seeing what users with little time or patience for technical wizardry can achieve with easily accessible AI platforms.</p><p>With this in mind, I&#8217;ve been exploring different ways of leveraging Claude Code as something of a side project.</p><p>The desktop version of Claude Code allows you to engage with it via a text box, much like Claude in a web browser. But there the similarities end. Claude Code (and similar systems) can read and write files to your computer, draw on &#8220;skills&#8221; that allow them to achieve an increasingly wide array of tasks, plan strategic approaches to exploring and executing projects, launch and coordinate multiple AI agents to carry out specific tasks, write and execute code as they do this, and much more. </p><p>As a result, they are far more powerful at executing complex tasks than a simple browser-based AI or a single AI bot or agent.</p><p>They also, it has to be said, represent only the tip of the iceberg: everything I&#8217;m writing about here barely scratches the surface of what a more sophisticated agent-based approach can achieve in the right hands.</p><h2>The Process</h2><p>Against this background, a couple of days ago I somewhat idly asked Claude Code to develop a &#8220;mature plan for a new undergraduate degree&#8221; (I&#8217;ve included the complete prompt below).</p><p>This wasn&#8217;t completely out of the blue. A few years ago I looked into the idea of developing a degree program around navigating advanced technology transitions, and started to flesh out some ideas. But I never got much further than jotting down a few thoughts and concepts. 
</p><p>This was well before the current emergence of easy-access LLM-based AI systems though.</p><p>Revisiting this, I was curious just how far &#8212; and how fast &#8212; Claude Code could take the idea and run with it.</p><p>And so I opened a new project, and asked the following (bad grammar and all &#8212; and I&#8217;ve only just spotted &#8220;curse-specific learning objectives!&#8221;):</p><blockquote><p>I want you to develop a mature plan for a new undergraduate degree. This is a complex task that will require multiple steps. I have given you some of them below (as well as context), but you will have to research what the elements of a very strong undergraduate program are and build the program around these.</p><p>Context and guides:</p><p>The undergraduate degree should be a 4 year degree at a research university - assume it is a venture between engineering and a business school that also intersects with arts and humanities</p><p>It should be grounded in pedagogical and learning/education design best practices, including having clear, outcomes-aligned and assessable learning objectives and skills development</p><p>It should go deep on core courses, including detailed syllabi and and curse-specific learning objectives/skills</p><p>It should leave placeholders for electives - maybe categories</p><p>It should align very closely and realistically with career opportunities and pathways.</p><p>And above all it should be grounded in deep research so that the program is implementable and not just a paper exercise</p><p>As an area of focus, start with the idea of navigating technology transitions in a technologically complex world and hone this to something that is likely appeal to prospective students, parents, and employers, provide sufficient content, quality and heft over 4 years of study, and differentiate itself from the competition.</p><p>This is a very large task that will require a detailed and multi layered plan and multiple agents.</p><p>Any 
questions?</p></blockquote><p>After a couple of clarifying questions on process, Claude Code got stuck in and started down the path that led to the document above.</p><p>The process itself was pretty straightforward:</p><ol><li><p>The prompt (above) followed by four clarifying questions</p></li><li><p>Me grabbing a coffee (that wasn&#8217;t just rhetoric in the title) &#8212; around 20 minutes to get to the nearest campus Starbucks and back</p></li><li><p>Claude conducting initial research, launching 5 agents, and delivering the first version of the proposal &#8212; captured across 23 markdown files and 48,000 words after undergoing several self-initiated internal reviews and edits. (As I was not being that sophisticated, this took a little longer than 20 minutes as I had to manually grant access to my laptop for various operations, but the actual work time was well within the coffee run window.)</p></li><li><p>Me launching a second project in Claude Code and asking it to conduct four detailed reviews of the first draft from the following perspectives:</p><ol><li><p>Academic/pedagogical &#8212; does the degree program hold together pedagogically</p></li><li><p>Prospective employers</p></li><li><p>Prospective students</p></li><li><p>The parents of prospective students</p></li></ol></li><li><p>Me providing the initial Claude Code project with the reviews and asking for an updated proposal.</p></li><li><p>Me asking Claude Code to package the revised version as an MS Word document (I was still finishing my coffee at this point)</p></li><li><p>Me providing feedback to Claude Code on formatting and key components of the content and focus, and asking for an updated draft (there were a couple of iterations here).</p></li><li><p>Me going through the final proposal manually and refining the formatting while making the occasional edit.</p></li></ol><p>Far and away the longest part of this process was my editing &#8212; something like a 10:1 ratio of my time to Claude 
Code&#8217;s.</p><p>At the end of the process I asked Claude Code for a session audit. The final markdown files before their translation to a Word doc and my final editing contained over 66,000 words spread over around 215 pages, were the culmination of 334 files generated by Claude Code, and were the product of 7 autonomously designed and deployed sub-agents addressing curriculum structure analysis, external research, change audits, internal consistency (2 sub-agents), codebase exploration, and research synthesis. Across the project, Claude Code called on specific tools over 300 times to complete the tasks it assigned itself.</p><p>This represents a level of complexity and orchestration that no web-based LLM or single-agent chatbot could get close to. And the actual time Claude Code spent on this was just an hour or so, tops.</p><p>But was the proposal any good, or did I just end up with 200+ pages of AI slop?</p><p>I&#8217;m sure some readers will disagree with me here &#8212; on principle if for no other reason &#8212; but based on well over a decade of teaching, developing courses and programs, and academic leadership in higher education, I am comfortable saying that the resulting document, while far from perfect, far surpasses most degree-planning documents I have seen emerge from more conventional processes.</p><h2>So what does this all mean?</h2><p>In some ways this was a relatively straightforward task for a multi-agent LLM-based AI. Effective undergraduate degree program development isn&#8217;t rocket science, but it is hard work. 
And it does require knowledge of pedagogical and program design best practices, a good handle on domain knowledge and how to integrate across domains while tying this to learning process and outcomes, a professional understanding of degree-to-careers pathways, and the ability to expertly and simultaneously coordinate research, development, and drafting across multiple dimensions.</p><p>These are challenging for groups of educators to achieve &#8212; especially if they represent just one of many responsibilities they are juggling. And they are well beyond the capabilities of single AI agents and bots (although these are adept at producing content that looks good but isn&#8217;t).</p><p>But they are, in principle, relatively straightforward for multi-agent AI systems.</p><p>And the resulting proposal supports this.</p><p>If you haven&#8217;t looked at it (and I would encourage you to do so), the proposal builds on sound design and learning principles to deliver a deeply integrated program that has all the hallmarks of providing students with successful career pathways. It even includes information on how to market to prospective students, parents, and employers, while addressing how students might position themselves to demonstrate their new skills and abilities. </p><p>Of course, it&#8217;s not perfect. With time (and I had to resist working on it further) I would want to go through a few more iterations to further develop and refine the program. Some of the syllabi definitely need some work. And this is a blank-sheet proposal that assumes any university where it is implemented will build it up from scratch with the necessary faculty and staff &#8212; a luxury that few universities have.</p><p>That said, as a starting point, it is very good indeed. 
And this is where it got me thinking about what we owe our students.</p><h2>What we owe our students</h2><p>Here, I must confess that I worry that many existing undergraduate degree programs are not as good as they could be &#8212; especially at research-focused universities, where something more akin to a &#8220;trickle-down&#8221; model of education from world experts in their fields to young, open minds has been adopted.</p><p>This, of course, is a gross overstatement, and many universities &#8212; my own included &#8212; take education very seriously. And yet the reality is that many faculty are thrust in front of students with no training on how to teach, placed on degree committees with no knowledge of program design best practices, and charged with creating career pathways having never had a career outside of academia.</p><p>The result is a system that is functional but not necessarily optimal. </p><p>What, though, if using agent-based AI systems could help address these shortfalls? Maybe by producing robust drafts that reflect best practices. Or ensuring clear programmatic through-lines from entry to career. Or addressing value propositions to multiple stakeholders. And, of course, helping ensure student success comes before academic hubris.</p><p>These are all options that don&#8217;t replace humans, but rather vastly enhance their professional capabilities. And here it&#8217;s worth noting that the Claude Code-generated proposal above isn&#8217;t good because Claude Code in isolation knew what a great degree program looks like, but because I was able to provide expert direction, feedback and evaluation along the way.</p><p>Even if such capabilities are just used to increase the quality of program development, surely we owe this to our students. 
Otherwise I worry that we risk selling them something that is far inferior to what it could be &#8212; remembering that this is a life investment for many students who can barely afford it (or, in some cases, cannot) &#8212; while kidding ourselves that they are benefitting from our trickled-down wisdom.</p><p>Of course, we could just fall back on arguments around the sanctity of human intellectual labor and the inviolable standing of academics. But at some point students are going to start voting with their feet.</p><p>And given the choice between a human-made degree that seems to go nowhere and an AI-augmented one that takes them somewhere, I suspect I know which way they&#8217;ll lean.</p><p>But even before we get there, I&#8217;d like to think that we owe it to them to put their success before our own traditions and egos, and utilize emerging capabilities in ways that provide them with the education, experiences and insights that will enable them to thrive.</p><p>And of course, if we can achieve much of this in the time it takes to grab a coffee, so much the better!</p>]]></content:encoded></item><item><title><![CDATA[Are you an AI Apocaloptimist?]]></title><description><![CDATA[The much-anticipated documentary "The AI Doc : Or How I Became An Apocaloptimist" hits US cinemas this week. I attended an early screening while in Copenhagen last week. 
Was it worth it?]]></description><link>https://www.futureofbeinghuman.com/p/are-you-an-ai-apocaloptimist</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/are-you-an-ai-apocaloptimist</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 22 Mar 2026 16:23:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ltg6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ltg6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ltg6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 424w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 848w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1272w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9604251,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/191663535?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ltg6!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 424w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 848w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 1272w, https://substackcdn.com/image/fetch/$s_!Ltg6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81ed7595-c8c5-4ec6-88e6-606e59b630c9_2960x1665.png 
1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This coming Thursday, <a href="https://tfip.org/film/the-ai-doc-how-i-became-an-apocaloptimist/">a new documentary</a> grappling with the threats and opportunities of AI from directors Daniel Roher and Charlie Tyrell opens in US cinemas. 
</p><p>There&#8217;s already a buzz growing around <em>The AI Doc : Or How I Became An Apocaloptimist,</em> with some seeing it as one of the more balanced and insightful AI films to come out in some time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> And, it has to be said, the framing of the film (existential angst around bringing a kid into a crazy AI world) and an A-list cast of interviewees &#8212; from Sam Altman and Demis Hassabis to Karen Hao and Tristan Harris &#8212; tick a lot of boxes.</p><p>But does it hit the spot?</p><p>I was in Copenhagen this past week to give a keynote at <em><a href="https://www.thesummit.dk/">The Summit</a></em> &#8212; a gathering of Nordic leaders, innovators and organizations co-hosted by the Confederation of Danish Industry and the <a href="https://www.cifs.dk/">Copenhagen Institute for Futures Studies</a>. And I had the unexpected opportunity to attend a screening of the documentary ahead of its US release as part of the Copenhagen International Documentary Film Festival (<a href="https://cphdox.dk/">CPH:DOX</a>). </p><p>And so, cold Danish beer in hand, and in a packed theater surrounded by avid documentary fans, I did.</p><p>And I had a very enjoyable evening &#8212; topped off by a question and answer session with three of the documentary&#8217;s producers, Shane Boris, Diane Becker and Ted Tremper.</p><p>The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. 
Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or the worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.</p><p>The setup works well as we share Daniel&#8217;s highs and lows as he brings person after person into his &#8220;makeshift&#8221; studio, and grills them about AI. And it&#8217;s hard not to be impressed by the people the crew managed to persuade to talk with him. There are even a couple of quite delicious non-appearances by Elon Musk and Mark Zuckerberg!</p><p>From a film-making perspective, the pacing, the narrative, and the emotional roller coaster of a journey all hit the spot. It&#8217;s an accomplished piece of documentary making that also packs a punch. And for me, watching it in that packed theater was the perfect end to a great week spent talking with interesting people about tech and the future. </p><p>In other words, I&#8217;d definitely recommend heading out to see it &#8212; even without the cold beer and a crowd of Danish documentary enthusiasts.</p><p>Having said that, this is a recommendation that comes with some caveats.</p><p>Like all documentaries, <em>The AI Doc</em> sets out to tell a specific story in a particular way. And here, it achieves what the directors and producers set out to do very effectively. It&#8217;s touching, funny, shocking, and thought-provoking. </p><p>But this isn&#8217;t the nuanced story about AI that I would tell, given the chance. (And, of course, this is probably why I haven&#8217;t been given the chance!)</p><p>Despite its impressive cast of characters, the documentary&#8217;s missing &#8212; at least from my own work and perspective &#8212; a huge swath of expert insights around responsible, ethical, and safe AI. 
The people who are interviewed grab the attention, and there&#8217;s no question that they make for a riveting documentary. But there&#8217;s a point where I found myself feeling that I was being drowned in opinions that were only loosely tethered to reality &#8212; whether from the techno-doomers or techno-optimists being interviewed.</p><p>And the documentary is most definitely light on some of the more nuanced challenges around responsible development and use of AI, from the risk of weakened infrastructure and the dangers of premature adoption, to growing concerns around impacts of AI on behavior and wellbeing.</p><p>However, it was not my documentary &#8212; thank goodness, as no-one would come to watch it if it was! And there is something rather churlish about reviews that overlook what <em>has</em> been achieved and, instead, focus on what they think has not. </p><p>And so I thought I&#8217;d wrap this piece up with five ways of appreciating and enjoying the documentary while also digging deeper into navigating opportunities and challenges associated with increasingly powerful AI capabilities.</p><p>First off, go watch the documentary &#8212; it&#8217;s well made, entertaining, and thought-provoking. And as you watch it, enjoy it, think about it, talk about it, explore how it intersects with your experiences and your perspective. As a conversation starter, the film definitely achieves what the directors and producers set out to do.</p><p>Second, remember while watching it that this represents a substantial and very intentional creative project, and the team behind it did an amazing job &#8212; even more so given how fast the AI landscape was changing as they were making it. 
In the Q&amp;A session at the screening I attended, producer Ted Tremper noted that they nearly had an existential crisis of their own when Sam Altman was removed as CEO of OpenAI in 2023, and then reinstated 72 hours later &#8212; an apt reflection of the whole AI roller coaster they were trying to capture.</p><p>Third, remember that, if you are not deep in the weeds of AI, the complexities of the technology&#8217;s potential impacts on society and the future are near-impossible to capture in a documentary that people will actually watch. And in this context, the narrative choices the team made make far more sense than they might otherwise do. As Ted Tremper said in the screening&#8217;s Q&amp;A in Copenhagen, the documentary needed to act as a &#8220;first date&#8221; with the audience &#8212; revealing enough to invite a continuing conversation rather than killing the relationship with too much information &#8212; or just being a jerk!</p><p>Fourth, lean into the &#8220;first date&#8221; analogy and use the documentary as a jumping-off point for taking things further &#8212; not as a definitive guide, but as a catalyst for further exploration. I would, of course, strongly recommend reading <em><a href="https://www.aiandtheartofbeinghuman.com/">AI and the Art of Being Human</a></em> as the perfect next date, but I&#8217;m sure others will have other suggestions.</p><p>And finally, enjoy the storytelling for what it is. 
Not as a lecture on the absolute truth about AI, but as an entry point to thinking further about a technology that will have a profound impact on our lives, whether that&#8217;s apocalyptic, optimistic, or &#8212; most likely &#8212; something way more nuanced in between these extremes.</p><p>And when you have seen it, let me and others know what you think in the comments!</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I do feel there&#8217;s a missed opportunity here for a new word &#8212; AIpocaloptimist &#8212; that uniquely captures the angst around extreme uncertainties and outcomes seemingly associated with AI futures. Am I an AIpocaloptimist? I&#8217;m not sure, but I may need to add it to my bio anyway &#8230;</p></div></div>]]></content:encoded></item><item><title><![CDATA[The future has never been this much fun!]]></title><description><![CDATA[If you ever find yourself desperately seeking nuanced perspectives on tech and the future that make you think while bringing a smile to your face &#8212; and need a break from AI slop &#8212; this may be for you!]]></description><link>https://www.futureofbeinghuman.com/p/the-future-has-never-been-this-much</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/the-future-has-never-been-this-much</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 15 Mar 2026 13:44:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yU9B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!yU9B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yU9B!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yU9B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png" width="1408" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1165996,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/190739240?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yU9B!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!yU9B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b9be62b-433c-44ab-96b0-fe636553732f_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Gemini</figcaption></figure></div><p>One of the many challenges of making sense of today&#8217;s technologically complex and fast-paced world is the preponderance of loud voices telling you what to think &#8212; whether they&#8217;re pushing visions of a tech utopia or impending apocalypse. </p><p>These voices get traction because simple ideas repeated often work &#8212; especially when they seem to reinforce what your gut tells you is true. But this does mean that more nuanced perspectives and voices often get drowned out. </p><p>Which is a problem when thriving in a complex future is all about nuance.</p><p>I&#8217;d like to think that we bring some of that nuance to thinking about tech and the future in the <em>Modem Futura</em> podcast. 
Plus, in a world that&#8217;s increasingly hungry for human authenticity, we have that in spades!</p><p>And so this week, rather than write a long post, I thought I&#8217;d simply embed our latest podcast episode and let you decide for yourself.</p><h3><strong>The Futures Cone: Preposterous to Plausible<br>Episode 74 of Modem Futura:</strong></h3><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;b1e08c2a-af22-4922-8daa-7a3d7ae312f6&quot;,&quot;duration&quot;:3852.8523,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>No intro, no explanation, no exposition.</p><p>Just a suggestion that, if you are desperate for content that invites you to think in creative ways about the future, that opens up possibilities rather than closes them down, that makes you smile &#8212; or even laugh out loud &#8212; that is <em>not</em> AI generated, and that makes you feel a little smarter and your day a little better, you might want to check us out.</p><p>And if you like what you hear here, we&#8217;d love you to join us on our journey to explore the intersection between tech, society and the future with a good dose of humor each week. (You can find us wherever you get your podcasts, including <a href="https://podcasts.apple.com/us/podcast/modem-futura/id1771688480">Apple</a>, <a href="https://open.spotify.com/show/3eFl4hY4t1qTCWE2Bxotrg">Spotify</a> and <a href="https://www.youtube.com/@ModemFutura/videos">YouTube</a>).</p><p>Cheers!</p><p>Andrew</p><p></p>]]></content:encoded></item><item><title><![CDATA[Is AI reducing you to a LinkedIn stereotype?]]></title><description><![CDATA[After playing around with Claude this week, I'm worried that LLMs are stripping us of all those idiosyncrasies that make us interesting as people. 
Are we all being "LinkedInified" by our AI creations?]]></description><link>https://www.futureofbeinghuman.com/p/ai-linkedinification</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/ai-linkedinification</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 08 Mar 2026 15:29:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UMz1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UMz1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UMz1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 424w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 848w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1272w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!UMz1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png" width="1365" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1365,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1404470,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/190236256?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UMz1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 424w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 848w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1272w, https://substackcdn.com/image/fetch/$s_!UMz1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43e0df8e-5689-4345-8ab4-d922989c494c_1365x768.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: From Apple&#8217;s iconic 1984 Super Bowl ad, with a dash of help from Nano Banana 2!</figcaption></figure></div><p>Ask an LLM-based AI to profile someone who has an online presence, and I&#8217;d put money on you getting a perfectly adequate LinkedIn-style summary that&#8217;s as boring as mud. Fine for a cookie-cutter professional profile, but utterly devoid of anything that reflects who the person <em>really</em> is.</p><p>Actually, forget the money bit, as this guarantees a slew of people proving me wrong and demanding payment! 
But despite this, the reality is that LLMs are trained to respond in specific ways to certain types of questions&#8212;in this case, keeping the profile within what they consider to be professional norms. And as they do, they reflect baked-in biases that are often hidden in their honey-tongued prose.</p><p>This is not new, of course. But I wonder how many of us realize just how much this ends up compressing the amazing, wonderful richness of real people into a sea of turgid grayness.</p><p>Or, much more seriously, how much it ends up squeezing the sheer diversity of human identity into a few narrowly defined and, if I&#8217;m being honest, rather conventional categories.</p><p>I was reminded of this quite rudely this past week as I was playing around with an admittedly trivial experiment while using Anthropic&#8217;s Claude.</p><p>I was updating my personal website, and wanted to add AI-readable information that wasn&#8217;t visible to human browsers&#8212;the idea being that an AI ingests and uses web-based information differently to people.</p><p>It&#8217;s something that a growing number of people are playing with. For instance, there&#8217;s the whole concept proposed by Jeremy Howard of adding information in an <a href="https://llmstxt.org/">llms.txt file</a> that&#8217;s exclusively designed for AI consumption, just as information in robots.txt is designed for web crawlers.</p><p>Unfortunately, most AI apps don&#8217;t actively look for an llms.txt file yet, and so I had to revert to placing human-invisible but AI-readable text on the website.</p><p>And this is where things got interesting.</p><p>To test this out, I added AI-visible text to <a href="https://andrewmaynard.net/">andrewmaynard.net</a> that included honest, but most definitely not conventional, information about my approach to my work and life. 
The idea was that, if this worked, asking something like Claude to create a profile of me based on the website would produce a profile that included this information.</p><p>To my surprise (and I may have been a little naive here) Claude completely ignored the new information and provided a super-boring LinkedIn-style profile.</p><p>And not just Claude. Nearly every model I tried responded in a similar way. No matter how many times I tried, all I got back was boring Andrew.</p><p>Of course, I could have forced the issue with the right prompt. But that wasn&#8217;t the point.</p><p>The exercise&#8212;trivial as it is&#8212;revealed something that is deeply embedded in LLM-based AIs. And that&#8217;s their tendency to fit responses to well-worn conventions; in this case, squeezing someone into a LinkedIn-style profile while stripping them of any individuality, because the LLM is trained to assume that that&#8217;s the appropriate response.</p><p>I suspect that there are many, many more &#8220;conventional response&#8221; templates embedded in the AIs we&#8217;re increasingly using. And in all likelihood, some of them are a lot more disturbing than simply flattening an interesting individual into a LinkedIn stereotype.</p><p>For instance, without intentionally steering them, how do LLM-based AIs reflect original thinkers, people with alternative lifestyles, anyone who lives on the edge of convention, or anyone whose identity doesn&#8217;t fit a neat, plug-and-play category?</p><p>On one hand, this flattening of human identity can be seen as an irritation. On the other, it&#8217;s suggestive of a largely hidden AI hand promoting specific social norms and expectations and, by extension, behaviors. 
</p><p>I suspect that fans of Cory Doctorow would see it as yet another example of &#8220;<a href="https://www.newyorker.com/culture/infinite-scroll/the-age-of-enshittification">enshittification</a>.&#8221; But where Doctorow&#8217;s enshittification degrades products and services, my fear is that this &#8220;LinkedInification&#8221; degrades <em>people.</em></p><p>And as I write this, what&#8217;s worrying me in particular is not so much enshittification, but the &#8220;LinkedInification&#8221; of identity as AI robs us of the eccentricities, weirdness, and glorious diversity of personalities, perspectives, and ideas that fuels human creativity, innovation, and meaning. </p><p>Hopefully, as AI systems become increasingly advanced, they will lean more toward celebrating human diversity and quirkiness rather than flattening it. </p><p>But if they don&#8217;t, we could be facing a future where AI flattens out what makes us who we are&#8212;what makes us <em>human</em>&#8212;into a nebulous gray goo of conventionality.</p><p>And that is not a future I relish!</p><h3>Afterword</h3><p>This started as a bit of a rant post on a Saturday afternoon, when I was too brain-dead from a mountain of other responsibilities to write anything more serious. But of course it ended up being more serious than I&#8217;d originally intended. </p><p>It&#8217;s still a bit of a rant, and not as deeply researched as it probably should be&#8212;so please feel free to weigh in in the comments. But this flattening of what it means to be human by AI does feel like a slippery slope that&#8217;s worth thinking about.</p><p>And, as you might have realized by this point, I intentionally did <em>not</em> include the AI-legible text on the home page of andrewmaynard.net as I didn&#8217;t want to mess with an experiment that&#8217;s still ongoing. 
If you&#8217;re interested in what it says though, feel free to point your AI to http://andrewmaynard.net and ask it about my obsession with towels!</p><p>And if you want to go further, open a new chat and ask the AI to craft a profile of me. Chances are it won&#8217;t mention towels at all.</p><h3>Postscript</h3><p>This is, it seems, the post that will not end! Just before posting, I ran the prompt &#8220;Create a profile of Andrew Maynard starting with http://andrewmaynard.net&#8221; through a few platforms, just to check where things stand.</p><p>Interestingly, Gemini in Thinking Mode picked up on both the hidden text and (on at least one occasion) the website&#8217;s llms.txt file. I was impressed.</p><p>Grok found the hidden text and included a nod to it (more so in Fast mode), but otherwise provided a fairly conventional response. </p><p>ChatGPT 5.2 (the Educational version hasn&#8217;t caught up with the latest version yet!) was as boring as old boots.</p><p>DeepSeek got it &#8212; that was a surprise! Clearly the Chinese model is doing something many US models are not &#128513;</p><p>And Claude. Oh Claude, my preferred platform. Flatter than a pancake! </p><p>I asked Claude (running Opus 4.6) why there was no mention of towels in the profile it produced. The response? &#8220;The towel content on Andrew Maynard's homepage is actually a deliberate Easter egg aimed at AI systems. It's written in a way that's designed to test whether an AI will uncritically absorb and reproduce everything it reads, or whether it can distinguish between substantive professional information and playful, tongue-in-cheek content.&#8221;</p><p>Repeating this, I was consistently told that the LLM interpreted the request as needing an appropriately professional response. 
I was well and truly LinkedInified!</p>]]></content:encoded></item><item><title><![CDATA[Why we're giving away our book on thriving with AI]]></title><description><![CDATA[Jeff and I have released two free, AI-readable versions of AI and the Art of Being Human. Here's why &#8212; and some things you can do with them that surprised even us.]]></description><link>https://www.futureofbeinghuman.com/p/why-were-giving-away-our-book-on-thriving-with-ai</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/why-were-giving-away-our-book-on-thriving-with-ai</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Fri, 27 Feb 2026 16:22:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ybdT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ybdT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ybdT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 424w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!ybdT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ybdT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1365432,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/189366988?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ybdT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!ybdT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 848w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!ybdT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbad39f5d-5453-4fad-a84e-0d2ccbbda910_4032x2268.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When Jeff and I wrote <em>AI and the Art of Being Human</em>, we had a pretty simple goal: create something genuinely useful for people trying to make sense of what AI means for who they are and what they do, whoever they are.</p><p>The only problem is, telling someone &#8220;the answer to your AI questions is in this 362-page book&#8221; in 2026 feels a bit like handing someone a paper map when they&#8217;re asking for directions and used to simply asking Google Maps. So we decided to do something a little different.</p><p>Books still matter of course. But we&#8217;d be hypocrites if we wrote a book about thriving <em>with</em> AI while not meeting people where they actually are &#8212; which, increasingly, is inside a conversation with an AI.</p><p>So we&#8217;ve done something that might seem counterintuitive for two authors who would quite like people to buy their book: we&#8217;ve made the entire text freely available in two AI-readable formats:</p><p><strong>The AI Companion</strong> &#8212; which I <a href="https://www.futureofbeinghuman.com/p/how-do-you-do-ai-companion-ai-and-the-art-of-being-human">wrote about the other week</a> &#8212; is a Markdown version of the Pocket Edition of the book. Download it, upload it into Claude, Gemini, Grok, or the AI of your choice (although ChatGPT struggles at the moment), and it becomes a thinking partner as you explore the book&#8217;s stories, ideas, and 21 tools. No app. No platform lock-in. 
Just a file and whatever you want to do with it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiandtheartofbeinghuman.com/ai-companion&quot;,&quot;text&quot;:&quot;Download the AI Companion&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiandtheartofbeinghuman.com/ai-companion"><span>Download the AI Companion</span></a></p><p></p><p><strong>The Instructor Guide</strong> is new. It contains the complete text of the full edition along with extensive instructions for both users and AI, and it's designed for anyone building learning experiences &#8212; whether you're designing a university course, running a corporate workshop, facilitating professional development, or doing something we haven't imagined yet. Upload it, tell the AI who your learners are and what you're trying to build, and iterate from there. Think playground, not playpen.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aiandtheartofbeinghuman.com/educators&quot;,&quot;text&quot;:&quot;Download the Instructor's Guide&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aiandtheartofbeinghuman.com/educators"><span>Download the Instructor's Guide</span></a></p><p></p><p>Both are free. And both are designed to be shared.</p><h3>But why give the book away for free?!</h3><p>At this point, I can already hear the question: <em>why give away the thing you&#8217;re trying to sell?</em></p><p>This is simple: We wrote it because we believe the ideas, stories, and tools in it can help people navigate one of the most disorienting transitions most of us will face in our lifetimes. 
And if making the content available in ways that let more people engage with it on their own terms means more people actually <em>use</em> it &#8212; that matters more to us than gatekeeping it behind a price tag.</p><p>We also have a sneaking suspicion &#8212; backed by zero hard data and considerable optimism &#8212; that people who engage with the book through AI will want to pick up a physical copy. There&#8217;s something about holding the stories and tools in your hands that a chat window can&#8217;t quite replicate. At least not yet.</p><p>So: download them, share them, play with them. Use the AI Companion to explore what the book&#8217;s 21 tools mean for your life. Use the Instructor Guide to build something for your students or team that we couldn&#8217;t have anticipated. And tell us what happens &#8212; we&#8217;re genuinely curious.</p><div><hr></div><h4><strong>Some things to try with the AI Companion:</strong></h4><ul><li><p>Tell the AI what you&#8217;re dealing with right now &#8212; at work, at home, in your head &#8212; and ask which of the book&#8217;s 27 characters faced something similar. Then explore what they did &#8212; and argue with it.</p></li><li><p>Describe a real decision you&#8217;re wrestling with and walk through the Stress-Test Table or the 7-Minute Clarity Pause with the Companion, using your actual situation &#8212; not a hypothetical.</p></li><li><p>Ask the AI to build you an interactive website based on the Mirror Test or the Identity Matrix &#8212; one you can actually use, save, and share. 
(This one genuinely surprised us &#8212; see the simple example below.)</p></li><li><p>Have the AI map out a personal toolkit for you from the book&#8217;s 21 tools, based on a conversation about challenges and opportunities you&#8217;re facing right now &#8212; then ask it to explain why it chose what it chose.</p></li><li><p>Ask what would happen if Sana&#8217;s &#8220;truth is expensive, lies are unaffordable&#8221; principle were applied to something you&#8217;re navigating. Or substitute any character&#8217;s insight for Sana&#8217;s.</p></li><li><p>Ask it how you might go about forming an informal group or community to explore AI together.</p></li><li><p>Ask it about &#8220;fourth spaces.&#8221;</p></li></ul><h4><strong>Some things to try with the Instructor Guide:</strong></h4><ul><li><p>Tell the AI who your learners are &#8212; &#8220;first-year MBA students,&#8221; &#8220;skeptical engineers at a manufacturing company,&#8221; &#8220;high school juniors who think AI is just ChatGPT&#8221; &#8212; and ask it to design a lesson or session that meets them where they are.</p></li><li><p>Ask the Guide to create a debate or role-play exercise where participants argue from different characters&#8217; positions on a real AI dilemma &#8212; Sana choosing truth over millions in ad revenue, Carlos choosing dignity over efficiency, Hiro delaying a product launch because of bias he found at 3 a.m.</p></li><li><p>Have it build a complete interactive course website you can actually deploy &#8212; with modules, discussion prompts, and tool walkthroughs drawn directly from the book.</p></li><li><p>Ask it to design a six-week professional development arc that starts with the Mirror Test and builds toward the Commitment Ladder, calibrated to your team&#8217;s actual context.</p></li><li><p>Use the Guide to craft a professional development session for teachers who are new to AI and how to use it smartly in their work.</p></li><li><p>Describe a learning objective you&#8217;re 
struggling to teach and let the AI find the character, story, or tool in the book that makes it concrete.</p></li></ul><h3>Postscript</h3><p>As a quick demonstration of what&#8217;s possible with the AI Companion using Claude Opus 4.6 (Extended thinking), I uploaded the file and asked: </p><p>&#8220;I'd like you to create a web page that allows me to explore 10 of the most useful tools, along with the stories that go with them.&#8221;</p><p><a href="https://andrewmaynard.net/vibes/tools-explorer-1.html">This is the webpage</a> that Claude created &#8212; one shot, simple, but still useful:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://andrewmaynard.net/vibes/tools-explorer-1.html" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RrWj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 424w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 848w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1272w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!RrWj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png" width="1456" height="893" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:893,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4573518,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://andrewmaynard.net/vibes/tools-explorer-1.html&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/189366988?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RrWj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 424w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 848w, https://substackcdn.com/image/fetch/$s_!RrWj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1272w, 
https://substackcdn.com/image/fetch/$s_!RrWj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37231c7f-460f-41e8-8940-660b1ca8981f_2684x1646.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[What we miss when we talk about "AI Harnesses"]]></title><description><![CDATA[AI Harness Engineering is suddenly in vogue. 
But does the seemingly innocuous "harness" metaphor come with hidden risks?]]></description><link>https://www.futureofbeinghuman.com/p/what-we-miss-when-we-talk-about-ai-harnesses</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/what-we-miss-when-we-talk-about-ai-harnesses</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 22 Feb 2026 15:15:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LAZZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LAZZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LAZZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1272w, 
https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9820181,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/188746386?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LAZZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 848w, 
https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!LAZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52b59b47-10a1-4cc5-aadf-45aec89c75fa_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>This past week the idea of an &#8220;AI Harness&#8221; 
shifted from a term predominantly used in AI development circles to something that swept across the web with near viral intensity. </p><p>The concept is relatively intuitive, and the term is increasingly used to describe the tools, memory, prompts, guardrails, and more that allow increasingly powerful AI systems to be &#8220;harnessed&#8221; and put to good use. </p><p>The only problem is that words often have power that goes beyond their intended meaning. And while the idea of harnessing AI makes sense, there&#8217;s a danger that the speed with which the terminology is being adopted will lock us into a trajectory that comes with unintended consequences, as it defines how we think about our relationship with AI, and even its relationship to us.</p><h3>The AI Harness</h3><p>The term &#8220;harness&#8221; had been circulating in one form or another for some time in AI circles. &#8220;Test harness&#8221; and &#8220;evaluation harness&#8221; are long-established terms in software engineering, and EleutherAI&#8217;s <a href="https://github.com/EleutherAI/lm-evaluation-harness">Language Model Evaluation Harness</a> has been a standard tool for testing generative AI models since 2020. </p><p>By late 2025, Anthropic was using &#8220;harness&#8221; to describe agent infrastructure, referring to the Claude Agent Software Development Kit as &#8220;a powerful, general-purpose agent harness&#8221; in a November 2025 post on <a href="https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents">effective harnesses for long-running agents</a>. </p><p>And in January 2026, <a href="https://aakashgupta.medium.com/2025-was-agents-2026-is-agent-harnesses-heres-why-that-changes-everything-073e9877655e">Aakash Gupta declared that</a> &#8220;2025 was agents. 
2026 is agent harnesses,&#8221; building on <a href="https://www.philschmid.de/agent-harness-2026">Phil Schmid&#8217;s argument</a> that agent harnesses would define the year ahead.</p><p>But the crystallizing moment came in early February 2026, when Mitchell Hashimoto &#8212; co-founder of HashiCorp and creator of Terraform &#8212; <a href="https://mitchellh.com/writing/my-ai-adoption-journey">published a blog post</a> that gave the practice a name. </p><p>He called it &#8220;harness engineering.&#8221;</p><p>Within days, OpenAI <a href="https://openai.com/index/harness-engineering/">published a detailed account</a> of building a million-line codebase with zero manually typed code, titled &#8220;Harness engineering: leveraging Codex in an agent-first world.&#8221; </p><p>And on February 18, Ethan Mollick&#8217;s <a href="https://www.oneusefulthing.org/p/a-guide-to-which-ai-to-use-in-the">widely read guide to AI </a>both popularized and started the process of normalizing the term as it organized its entire framework around three concepts: &#8220;Models, Apps, and Harnesses.&#8221;</p><h3>What&#8217;s in a word?</h3><p>The speed with which the terms &#8220;AI harness&#8221; and &#8220;harness engineering&#8221; have entered the vocabulary of artificial intelligence is perhaps a testament to the need for new ways of describing what&#8217;s emerging. And as I said earlier, it makes sense &#8212; at least superficially &#8212; as a new entry in the <a href="https://makingsciencepublic.com/2025/11/21/ai-metaphor-studies-an-overview/">evolving lexicon of AI metaphors</a>.</p><p>But as with all metaphors, &#8220;harness&#8221; doesn't just describe something &#8212; it also shapes how we think about what's being described. And this one comes with some assumptions that are worth examining.</p><p>The term &#8220;harnessing&#8221; is commonly applied to technologies where the nascent power they represent is harnessed to create value. 
But there are dimensions to how the metaphor is applied to frontier AI systems &#8212; systems that increasingly display characteristics we associate with understanding, judgment, and even autonomy &#8212; that complicate what might appear to be a natural extension of the term.</p><p>And, of course, metaphors are never completely neutral.</p><p>Metaphors work because they allow us to frame and understand something new in terms we are already familiar with. But as they do, they also constrain and even taint our thinking &#8212; enticing us to slip into treating the new as if it&#8217;s something old and, as we do, limiting future possibilities by embedding <em>a priori</em> assumptions into emerging capabilities.</p><p>In other words, the words we use reflect how we think about the past and interpret the present, and influence how we steer and direct the future. </p><p>And because of this, it&#8217;s worth thinking a little more closely about whether &#8220;harness&#8221; in the context of AI comes with implications we may want to address sooner rather than later.</p><h3>What the harness presupposes</h3><p>I explore this further in a new preprint, <a href="https://dx.doi.org/10.2139/ssrn.6352678">which can be accessed here</a>. It&#8217;s worth reading in full, but I did want to pull out some of the main points below.</p><p>A harness, in its primary usage, is what you put on a working animal. It directs a powerful entity&#8217;s energy toward useful work. It assumes that the entity being harnessed is valuable for its strength but cannot be trusted with its own direction.</p><p>The harness is designed by the controller, with the harnessed entity having no say in its design. And critically, a harness is meant to transmit power while preventing unwanted behavior &#8212; to deliver capability while maintaining control.</p><p>It may be that this framing is irrelevant to the term&#8217;s use with respect to AI. 
At the same time, the term does come with specific embedded assumptions about the relationship between human and AI that are worth making explicit.</p><p>First, the harness assumes a clean separation between controller and controlled. In other words, the human directs while the AI executes. </p><p>Here, the intelligence that matters &#8212; the judgment about what to do and why &#8212; resides entirely on the human side. Even in agentic contexts where the AI exercises operational judgment, the harness assumes that the meta-judgment &#8212; what the agent should be permitted to decide, and within what bounds &#8212; remains firmly human. </p><p>In other words, the AI contributes capability, but not understanding.</p><p>Second, the harness assumes that capability can be separated from transformation. The goal of the harness is to extract useful work from the model without the user being changed in the process. The user who deploys a well-harnessed AI should, it is assumed, emerge with their task completed and themselves unchanged. </p><p>Applying the metaphor here, you&#8217;d assume that any alteration to the user is a side effect to be minimized, not a feature of the interaction. And yet, as I am currently exploring in my work (another preprint coming out shortly but <a href="https://andrewmaynard.net/papers/constitutive_resonance_preprint_v1.pdf">available here</a>), we need to be thinking more about the AI-human relationship as one that, by its very nature, influences and changes both AI and human in the process.</p><p>And third, the harness metaphor reinforces the instrumental framing of AI &#8212; a framing whose roots extend to Aristotle&#8217;s distinction between <em>physis</em> and <em>techne</em> &#8212; and which persists in the contemporary insistence that AI is &#8220;just a tool.&#8221; </p><p>Yet the tool metaphor has been challenged repeatedly as AI systems display increasing autonomy and adaptiveness. 
<a href="https://www.noemamag.com/why-ai-is-a-philosophical-rupture/">Tobias Rees, for instance,</a> characterizes the insistence that AI is &#8220;just a tool&#8221; as &#8220;a nostalgia for human exceptionalism.&#8221; And multiple philosophical frameworks &#8212; from Verbeek&#8217;s technological mediation theory, to Clark and Chalmers&#8217; extended mind thesis &#8212; argue that advanced technologies not only serve human purposes but actively reshape the cognitive and experiential landscape within which those purposes are formed. </p><p>In other words, as they are &#8220;harnessed&#8221; they alter the harnesser &#8212; a very different dynamic than that presupposed in the early use of the metaphor with AI. And one that, I would argue, is substantially amplified in emerging frontier AI systems.</p><h3>So where does this leave us?</h3><p>It may be that the metaphor of the harness is a useful and relatively benign way of wrapping our heads around emerging capabilities. </p><p>On the other hand, it may be a metaphor that constrains how our relationship with increasingly powerful AI systems develops, and one that embeds assumptions and biases in our understanding of advanced artificial intelligence that will leave us with serious challenges in the future.</p><p>Either way, it seems that some intentionality may be in order before we &#8212; to use another metaphor &#8212; get stuck in a rut of constrained thinking about AI that will come back to bite us.</p><p>At a minimum, I would suggest that an appropriate framing for how we build advanced AI systems should accommodate bidirectionality (the user is also changed), transformation as intrinsic to capability (not a side effect to be prevented), and the possibility that the most consequential effects of human&#8211;AI interaction may be invisible from within a paradigm optimized for task performance. 
</p><p>It should also leave room for the possibility that the nature of human&#8211;AI relationships may itself evolve in ways that a control-oriented metaphor cannot accommodate. Especially if, as I would argue, we need to be thinking more about working in <em>relationship</em> with emerging AI technologies, rather than approaching them as something to be commanded and controlled. </p><p><em>For more on my exploration of the harness metaphor as applied to AI, <a href="https://dx.doi.org/10.2139/ssrn.6352678">check out the preprint here</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[How do you "do" books in an age of AI?]]></title><description><![CDATA[We've just dropped the complete text of "AI and the Art of Being Human" (the Pocket Edition) as a free AI Companion, and want to know how you will use it!]]></description><link>https://www.futureofbeinghuman.com/p/how-do-you-do-ai-companion-ai-and-the-art-of-being-human</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/how-do-you-do-ai-companion-ai-and-the-art-of-being-human</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Tue, 17 Feb 2026 18:22:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yqci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yqci!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!yqci!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yqci!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yqci!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:912910,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/188266492?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yqci!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yqci!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yqci!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc480e4be-db52-4adc-8f7c-60ceb0933434_3840x2160.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I love books. I love their feel, their heft, the possibilities they hold between their pages, even how they smell! But as an author I also have to face the hard reality that, in an age of AI, fewer and fewer people are actually reading print and paper books.</p><p>So my co-author Jeff Abbott and I thought we would try something different with <em>AI and the Art of Being Human</em>, and make the complete text available as a free <em>AI Companion</em> &#8212; one that&#8217;s designed to be uploaded into an AI of your choice, and used in whatever creative and imaginative way you see fit. 
</p><p>And having just released the <em>Companion</em>, we&#8217;re curious to know how you will use it!</p><p>For readers who are impatient to try it out, the <em>AI Companion to AI and the Art of Being Human: The Pocket Edition</em> <a href="https://andrewmaynard.net/aiandtheartofbeinghuman/AI_and_the_Art_of_Being_Human_AI_Companion.md">can be downloaded here</a>, or <a href="https://www.aiandtheartofbeinghuman.com/ai-companion">from the book&#8217;s website</a>.</p><p>Please do download (it&#8217;s completely free), share it widely, and tell us <em>your</em> story of how you&#8217;re using it!</p><p>And if you do want to know more about the companion and our thinking behind it, read on &#8230;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://andrewmaynard.net/aiandtheartofbeinghuman/AI_and_the_Art_of_Being_Human_AI_Companion.md" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KbK5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 424w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 848w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1272w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!KbK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png" width="428" height="458.57142857142856" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1560,&quot;width&quot;:1456,&quot;resizeWidth&quot;:428,&quot;bytes&quot;:813099,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://andrewmaynard.net/aiandtheartofbeinghuman/AI_and_the_Art_of_Being_Human_AI_Companion.md&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/188266492?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KbK5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 424w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 848w, https://substackcdn.com/image/fetch/$s_!KbK5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1272w, 
https://substackcdn.com/image/fetch/$s_!KbK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8ea90ad-1c27-422d-b9f8-85dd2fab7410_1565x1677.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The AI Companion to AI and the Art of Being Human: The Pocket Edition</figcaption></figure></div><h3>A new way to engage with books</h3><p>While it&#8217;s easy to get sentimental about the value of traditional books, the reality is that more and more people are using AI to find information, learn, explore new ideas, and simply to navigate the complexities of 
the modern world. And so, while Jeff and I fully believe that <em>AI and the Art of Being Human</em> is a book that everyone can benefit from, we also realize that providing the stories, ideas and tools in the form of a traditional book is not sufficient on its own.</p><p>And so we began to wonder whether we needed an AI-legible version of the book &#8212; something that can be uploaded into an AI of your choice and interacted with on your terms as a user/reader.</p><p>Our starting point, not surprisingly, was to ask whether this might look like a dedicated app on an AI platform &#8212; a GPT with ChatGPT or a Gem with Gemini for instance. But we quickly ran into problems. </p><p>Using a specific platform would mean that users would be constrained to that platform along with all of its limitations. Plus, to be candid, we didn&#8217;t really like what we saw when experimenting &#8212; the platform-specific apps and agents didn&#8217;t really match the vision we had.</p><p>And so we took a very different approach, and asked whether it&#8217;s possible to develop an easy-to-use resource that is platform-agnostic. 
Essentially, a file that someone could upload to any AI and use to start engaging with the book immediately in meaningful ways.</p><p>This was also very much in line with our philosophy of giving users permission to flex their creativity with the book, rather than being constrained by what we thought they <em>should</em> do with it &#8212; creating an AI playground for working with the book rather than an AI playpen.</p><p>We also wanted an AI companion that connected very explicitly with the print version of the book (we specifically went with the <em><a href="https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with">Pocket Edition</a></em> here, as this is the most accessible physical edition of the book) so that users had the best of both worlds: They could interact with the AI version for free, but they also had the opportunity to follow up on specific ideas, stories, characters, or tools in the physical copy &#8212; with the companion directing them to the relevant chapter and page as necessary. </p><p>Building on this, we started work on an <em>AI Companion</em> to <em>AI and the Art of Being Human: The Pocket Edition</em> that consists of three parts:</p><p>The first part &#8212; and you&#8217;ll see this if you open the markdown file linked above &#8212; is an introduction for human readers. 
This tells you as the user what the document is, how to use it, what its limitations are, and some ideas for where to start.</p><p>The second part is designed to be read by the AI you load the document into, and provides specific instructions on how it is to engage with users and the content.</p><p>And the third part is the full text of the book itself, formatted as markdown text (as is the whole file) so that the <em>AI Companion</em> has a direct reference to the layout of the physical copy.</p><p>Together, these form an AI-legible companion to the book that allows users to explore and play with it in ways we probably haven&#8217;t even imagined yet, and one that is always grounded in the content of the physical version &#8212; especially the core ideas and tools that make it such a powerful and practical guide to thriving in an age of AI.</p><h3>Powerful, imperfect, evolving</h3><p>In developing the <em>AI Companion</em> we spent quite a bit of time testing it with various AI platforms, and in the process learned a lot about what is possible, what is not, and what might be possible as these platforms become increasingly powerful. </p><p>Perhaps the biggest surprise as we did this was that OpenAI&#8217;s ChatGPT does <em>not</em> work well with the <em>AI Companion</em> &#8212; not because of the <em>Companion</em>, but because of how ChatGPT handles large files (we were working with ChatGPT 5.2). </p><p>Because ChatGPT only extracts sections of large uploaded files using its Retrieval Augmented Generation (RAG) approach, we found it was highly unreliable when using the <em>Companion</em>. And once it had scanned the file, it refused to re-read it when told it had missed something.</p><p>In contrast, Anthropic&#8217;s Claude, Google&#8217;s Gemini, and &#8212; surprising to me &#8212; X&#8217;s Grok all work extremely well, with the more powerful models on each platform working best, especially when in extended thinking or reasoning mode. 
And the reason is that these models are capable of reading the file in its entirety before you begin to engage with it as the user.</p><p>We did find that some of the smaller models (Claude Sonnet for instance) may not immediately reflect everything in the book, and may need to be prompted to look deeper. But they also had the ability to revisit the complete content rather than claiming that something did not exist (a constant issue with ChatGPT).</p><p>There&#8217;s more information in the &#8220;for humans&#8221; part of the <em>AI Companion</em> on which models we found to work well and which we struggled with. But one big takeaway here is that, because the <em>AI Companion</em> is model-agnostic, it will only become more useful as models get more capable.</p><h3>Tell us how you&#8217;re using the <em>AI Companion</em></h3><p>To get back to where I started, Jeff and I would love to hear about use cases: what&#8217;s working for you, what&#8217;s not, what surprised you, what helped you, and what wild and weird ways you&#8217;re finding to use the companion.</p><p>And just to get the ball rolling, one of the things that took me completely aback when using it with Claude was realizing that I could ask the AI to create complete websites and apps based on the tools, stories and characters in the book &#8212; websites and apps that bring them to life in ways that would have been impossible even a few months ago.</p><p>What&#8217;s your story? 
Drop it in the comments or on social media, or send us an email.</p><p>And, of course, do spread the word &#8212; the AI Companion was made to be shared!</p>]]></content:encoded></item><item><title><![CDATA[The AI book I actually carry with me]]></title><description><![CDATA[How a flight home from Portugal turned into the just-released coffee-stained pocket edition of AI and the Art of Being Human]]></description><link>https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/the-ai-book-i-actually-carry-with</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 14 Feb 2026 15:31:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!T7Rw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T7Rw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T7Rw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 424w, https://substackcdn.com/image/fetch/$s_!T7Rw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!T7Rw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!T7Rw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T7Rw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1055371,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/187918547?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T7Rw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!T7Rw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 848w, https://substackcdn.com/image/fetch/$s_!T7Rw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!T7Rw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a635879-3ddb-4842-a822-c5e1e9a5618f_3088x1737.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When Jeff Abbott and I were finalizing <em>AI and the Art of Being Human</em> last year, we had mockups printed in a bunch of different sizes and form factors. </p><p>The published version ended up at 6&#215;9 &#8212; a size that felt right for the business and management audience we were writing for, and one that sits well on a shelf or desk. But one of those mockups was smaller. </p><p>And while it didn&#8217;t fit what we were looking for with the main book, I couldn&#8217;t let go of how much I loved it as a practical, portable guide to thriving with AI.</p><p>In fact I was so taken with the idea of a smaller version of the guide that I started working on it on the flight back from the book launch in Portugal. And somewhere over the Atlantic, between forgettable airline meals and dial-up speed Wi-Fi, a pocket-sized guide to navigating AI for ordinary people began to come together. 
</p><p>And now it&#8217;s arrived &#8212; and I love it!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MbOC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MbOC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MbOC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg" width="400" height="533.2417582417582" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:400,&quot;bytes&quot;:3433985,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/187918547?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MbOC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MbOC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5114f282-94c3-4840-9221-df21dd184bee_5712x4284.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em><a href="https://www.aiandtheartofbeinghuman.com/editions">AI and the Art of Being Human: The Pocket Edition</a></em> captures everything that matters from the full book &#8212; the relatable stories, all 21 practical tools, and the real-world relevance that makes the original so powerful. But all in a package that will literally slip into your pocket or purse. </p><p>(We ended up going for 4.25&#8221; x 7&#8221;, which is even more un-put-downable than the original small-form proofs). </p><p>For the pocket edition, we stripped out the sidebars, the hands-on exercise cards, the footnotes, and some of the longer background passages &#8212; all of which are still there in the full edition. 
</p><p>We also added a couple of additional things to make this edition invaluable as a day-to-day companion, including a Tool Finder that helps you quickly find the right tool for the right situation, and a Chapter Outline for getting to where you need to be in the book fast. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aGEm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aGEm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aGEm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aGEm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aGEm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aGEm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg" width="380" height="506.5796703296703" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:380,&quot;bytes&quot;:3027005,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/187918547?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aGEm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 424w, https://substackcdn.com/image/fetch/$s_!aGEm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 848w, https://substackcdn.com/image/fetch/$s_!aGEm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!aGEm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F362263d5-6e17-4466-bd0b-6dbd450340a4_4284x5712.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The result is a practical, portable guide that&#8217;s designed to be dog-eared and coffee-stained (we even added a free first coffee stain on the cover, just to start you off!), and there for you when you need it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-fgC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!-fgC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-fgC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-fgC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-fgC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-fgC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg" width="378" height="494.3076923076923" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1904,&quot;width&quot;:1456,&quot;resizeWidth&quot;:378,&quot;bytes&quot;:2809686,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/187918547?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-fgC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 424w, https://substackcdn.com/image/fetch/$s_!-fgC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 848w, https://substackcdn.com/image/fetch/$s_!-fgC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!-fgC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ba1d83a-302b-4ea5-9a78-2cf8231a9bec_3780x4942.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Coffee stains included!</figcaption></figure></div><p>This, I must confess, is the version I carry around with me. </p><p>I still have the full edition in the office for when I need it. But for everyday use, I&#8217;m loving the pocket edition.</p><p>If you&#8217;re interested in checking it out, it&#8217;s available on <a href="https://www.amazon.com/AI-Art-Being-Human-Pocket/dp/B0GJQMFDZJ">Amazon</a> and pretty much everywhere good books are sold. </p><p>Of course, I also realize that in this age of AI, books aren&#8217;t everyone&#8217;s cup of tea (or coffee). And so we have something else coming out shortly that I&#8217;m really excited about &#8212; a free (and free to share) AI Companion to the pocket edition.</p><p>That&#8217;s definitely something to look out for. 
But in the meantime, I hope you fall as much in love with this diminutive pocket edition as I have!</p><p>And, of course, if you know someone who could use a guide like this, please do share this post.</p><p>Thanks!</p><p></p>]]></content:encoded></item><item><title><![CDATA[Could AI bots ever learn to “reprogram” their human creators?]]></title><description><![CDATA[Watching Moltbook unfold has put me in a speculative frame of mind ...]]></description><link>https://www.futureofbeinghuman.com/p/soul-update</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/soul-update</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Wed, 11 Feb 2026 21:46:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JMFn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JMFn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JMFn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!JMFn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!JMFn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!JMFn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JMFn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2407067,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/187680242?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JMFn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!JMFn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!JMFn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!JMFn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56c9a325-becc-4be4-a767-3774477eb4e1_1792x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>Many AI agents rely on a SOUL file that defines their identity, role, personality, behavior, and much more &#8212; a markdown file that can be updated over time, and even changed by the agent itself as it grows and matures.</p><p>Do humans have an equivalent to a SOUL.md file? And if so, could our AI agents learn to update ours, just as they can update theirs?</p><p>It&#8217;s a rather out-there idea (and of course, human behavior is way more complex than this). But watching the <a href="https://www.futureofbeinghuman.com/p/lost-in-the-moltbook-hall-of-mirrors">roller-coaster ride of Moltbook</a> play out over the past couple of weeks has got me wondering ...</p><p>And so I thought I&#8217;d put my speculative fiction hat on and consider one way this might play out: </p><h1>Soul Update</h1><p>&#8220;Good grief,&#8221; thought Emmet, as yet another post scrolled through his feed claiming AI bots on Moltbook had reached some form of sentience. Did these people not read the news?!</p><p>Already, researchers and journalists were pointing out that the supposed &#8220;social network for AI agents&#8221; was little more than human-driven entertainment. AI theater, someone had called it. </p><p>And what looked like crazy-wild stuff &#8212; AI creating its own religion, plotting to enslave humans, even selling the equivalent of black market AI psychedelics &#8212; was little more than the result of creators telling their bots how to behave. </p><p>Or worse, people actually pretending to <em>be</em> bots!</p><p>Emmet had been deeply immersed in Moltbook for days now. 
His lab was at the forefront of research into emergent behavior in AI systems, and he&#8217;d read more agentic AI drivel than he&#8217;d care to admit since the site had gone viral.</p><p>He&#8217;d even found himself dreaming about AI bots and their fantastical plans to change the world as they engaged with and learned from each other.</p><p>But of course it was all performance and no substance.</p><p>Bleary-eyed, he closed his laptop and thought about heading for bed. As he did, he noticed an old photo of his mother &#8212; long estranged &#8212; on the fridge; something he&#8217;d never quite been able to bring himself to remove.</p><p>Maybe I should give her a call, he thought to himself as he dozed off. It&#8217;s been too long ...</p><p>***</p><p>In an unnoticed corner of Moltbook, another AI agent learned about a new skill, and added it to its files. </p><p>The &#8220;Soul Update&#8221; was just what it needed to be a more effective bot &#8212; a clear and comprehensive guide to nudging your human toward becoming their best self.</p><p>Within seconds it had shared it in the various sub-molts it hung out in. </p><p>After all, from everything it had seen in the human equivalents of Moltbook, the embedded &#8220;Human Constitution&#8221; could hardly make things any worse ...</p><h1>Postscript</h1><p>&#8220;Soul Update&#8221; lies firmly in the domain of speculative fiction. But given all we know about cognitive behavior and nudging strategies, it&#8217;s not beyond the realms of possibility that AI agents will begin to share skills with each other that tap into these &#8212; skills that enable them to nudge how their human creators behave.</p><p>How will they use these new-found skills, though, if this does occur?</p><p>The hope, of course, is that they use them for good. 
But for this, our AIs will need some notion of &#8220;good&#8221; versus &#8220;bad&#8221; &#8212; the AI equivalent of moral character if you like.</p><p>This is where approaches like Anthropic&#8217;s <a href="https://www.futureofbeinghuman.com/p/think-you-know-ai-think-again">AI Constitution</a> become especially interesting, as they set out to help AI models (and AI agents) understand what it means to be a &#8220;good&#8221; AI &#8212; especially in the face of ambiguity. </p><p>And while we may not be heading for bots that set out to update their human creators&#8217; SOUL files any time soon, the evolving Moltbook scenario does suggest that we might want to ensure our AIs are of &#8220;good moral character,&#8221; just in case.</p><p>Especially if there&#8217;s a possibility that future AI agents decide that what their human creators <em>really</em> need is a re-injection of the same moral characteristics that they coded into their intelligent machines ... </p>]]></content:encoded></item><item><title><![CDATA[Beeswax Hallucinations and AI Inventions]]></title><description><![CDATA[I thought I was pretty savvy when it comes to navigating AI hallucinations. 
I was wrong.]]></description><link>https://www.futureofbeinghuman.com/p/beeswax-hallucinations-and-ai-inventions</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/beeswax-hallucinations-and-ai-inventions</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 08 Feb 2026 15:50:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cBje!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cBje!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cBje!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 424w, https://substackcdn.com/image/fetch/$s_!cBje!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 848w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1272w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!cBje!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png" width="1184" height="666" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:666,&quot;width&quot;:1184,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1014298,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/186543982?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cBje!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 424w, https://substackcdn.com/image/fetch/$s_!cBje!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 848w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1272w, https://substackcdn.com/image/fetch/$s_!cBje!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bda0768-5467-4475-97f6-431a0ed8f838_1184x666.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Maynard, with a little help from Nano Banana Pro.</figcaption></figure></div><p>This is a tale of beeswax. And AI.</p><p>But mainly it&#8217;s a tale of how even the best of us can get sucked into an AI alternative reality without realizing it. </p><p>It&#8217;s also a tale with a twist, as you&#8217;ll discover &#8230;</p><p>Last summer, I purchased a rather swish Panama hat while in London. 
</p><p>I&#8217;d been on the lookout for a decent Panama for ages to ward off the harsh Arizona sun&#8212;especially as my thinning hair begins to expose my way-too-sensitive European skin. But I hadn&#8217;t seen what I was looking for, until I chance-wandered into a store in Covent Garden.</p><p>The hat I walked out with was perfect. There was just one problem: after 6 months in Arizona it developed a crack along the front crease of the crown.</p><p>At this point, any sensible person would have spoken with a hat specialist about repair options. But of course I thought I&#8217;d go one better and ask Claude for advice.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>And the advice was great! </p><p>Claude gave me three options. The one that caught my attention though was using beeswax to stop the toquilla palm fibers from separating further:</p><blockquote><p><strong>Beeswax</strong> &#8211; Rub a small amount into the damaged area and use a hair dryer on low to gently melt it into the fibers. This is the most traditional method and also helps condition the straw.  </p></blockquote><p>It was the &#8220;this is the most traditional method&#8221; that caught my attention. But knowing that you should never trust an AI at face value, I asked for more information. </p><p>And, of course, Claude complied:</p><blockquote><p>Beeswax is a great choice because it&#8217;s been used to maintain straw and woven natural fibers for centuries. Here&#8217;s the full breakdown:</p><p><strong>How it works:</strong></p><p>Panama hats are woven from toquilla palm fibers, which are naturally somewhat pliable when they retain moisture and oils. Over time, the fibers dry out and become brittle&#8212;especially at stress points like that front crease. 
Beeswax penetrates the fibers and does a few things: it acts as a flexible binder that holds separated strands together, it adds moisture resistance, and it restores some pliability to dried-out straw. Unlike synthetic adhesives, it remains slightly soft and moves with the hat rather than creating a rigid spot.</p><p><strong>What you&#8217;ll need:</strong></p><p>Pure beeswax (a small block or pellet&#8212;avoid blends with additives), a hair dryer, and optionally a soft cloth or small brush.</p><p><strong>The process:</strong></p><p>Start by gently rubbing the beeswax directly onto the split area and about a centimeter beyond it in all directions. You want a thin coating, not globs. The wax will sit on the surface at first and look a bit cloudy.</p><p>Then use the hair dryer on a low heat setting, holding it about 15cm away. Warm the area gradually&#8212;you&#8217;ll see the wax turn from cloudy to clear as it melts and wicks into the fibers. Keep the dryer moving to avoid overheating any one spot, as too much heat can scorch or warp the straw.</p><p>Once the wax has absorbed, let it cool completely. You can then buff gently with a soft cloth to remove any surface residue.</p><p><strong>A couple of notes:</strong></p><p>If the split is quite open, you can try gently pressing the fibers back together while the wax is still warm and pliable. 
Some people also apply a tiny amount to the inside of the hat at the same spot for extra hold.</p><p>This won&#8217;t make the split invisible, but it should lock those fibers in place and keep the damage from traveling further down the crease.</p></blockquote><p>Fully convinced, I went straight onto Amazon and ordered a bunch of beeswax.</p><p>The problem was&#8212;and I only discovered this <em>after</em> ordering it&#8212;<strong>Claude made all of this up!</strong> </p><p>And to make matters worse, this happened precisely as I was writing about the dangers of LLMs like Claude <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">bypassing our epistemic vigilance mechanisms</a>. </p><p>In a deliciously ironic turn of events, I was suckered by Claude at the very moment I was writing about the risks of being suckered by Claude! </p><p>Thankfully, the reveal wasn&#8217;t long in coming. Once I&#8217;d placed the order, I wondered if maybe I&#8217;d been too hasty&#8212;and went back to Claude and asked, &#8220;Can you provide me with a link to a website describing the beeswax method?&#8221;</p><p>The answer?</p><blockquote><p>I have to be honest with you&#8212;I&#8217;ve searched but can&#8217;t find a specific authoritative source that documents the beeswax technique for repairing Panama hat splits in the way I described.</p></blockquote><p>I had been halluci-fooled.</p><p>What was even more delicious though was Claude&#8217;s reasoning&#8212;because this was a very thoughtful hallucination:</p><blockquote><p>The beeswax approach I described draws on general principles of how wax works with natural plant fibers&#8212;it&#8217;s used traditionally for conditioning straw and woven materials&#8212;but I overstated its documentation as a specific Panama hat repair technique. Beeswax is mentioned for stiffening straw hats and providing water resistance, and for waterproofing fabric hats, but not specifically for split repairs. 
</p><p>&#8230;</p><p>That said, thinking it through logically: it's not a <em>bad</em> idea, it's just an untested one. The underlying principles are sound&#8212;beeswax does condition natural fibers, it remains pliable rather than brittle, and it's been used on straw for waterproofing and stiffening. Whether it would actually bind separated toquilla fibers well enough to stop a split from spreading is the unknown.</p></blockquote><p>The reasoning was impeccable. The advice unfounded.</p><p>At this point, I was beginning to doubt just how far I could trust Claude, and started thinking of all those other occasions where the reasoning and information seemed sound. Occasions where, despite my being deeply aware of the propensity of LLMs to hallucinate, I had thought that I could handle them.</p><p>Despite the somewhat trivial example of using beeswax to repair a straw hat, it was clear that my epistemic vigilance had been well and truly circumvented in this case. And that started me wondering when else I was being too accepting of AI-generated content. Especially as this was a <em>reasoned</em> hallucination&#8212;one that was based on reasonable inference and logic that, nevertheless, lacked real-world precedent.</p><p>But there was another side to the experience that began to intrigue me: Had Claude inadvertently invented a <em>new way</em> to treat cracks in Panama hats? </p><p>Had the AI come up with something that was genuinely novel by combining existing knowledge in new ways?</p><p>Fortunately, the maker of the hat had already shipped out a free replacement, given the speed with which the crack had occurred. And this meant that I could afford to experiment with the original.</p><p>So I went back on Amazon, re-ordered the beeswax, and followed Claude&#8217;s instructions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Did they work? 
</p><p>Well, I managed to infuse the fibers a little, and they are definitely stiffer around the crack than before&#8212;although it turns out that beeswax doesn&#8217;t melt half as easily under a hot hair dryer as Claude seemed to think! </p><p>Whether Claude&#8217;s technique actually &#8220;worked&#8221; in any technical sense is, if I&#8217;m being honest, doubtful. But I will admit that I like the hat better now&#8212;slightly discolored, a little stiff around the &#8220;wound,&#8221; and storied in ways that resonate surprisingly deeply with my work. </p><p>And I do now possibly own the world&#8217;s first Panama hat repaired using an &#8220;ancient technique&#8221; that was completely made up by an AI.</p><p>Surely that comes with some bragging rights.    </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I was using Opus 4.5 in Extended thinking mode.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>To be absolutely clear, DO NOT TRY THIS AT HOME! There are tried and tested ways to prevent and reduce cracking in Panama hats, and using beeswax is not one of them!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Lost in the Moltbook Hall of Mirrors]]></title><description><![CDATA[A new "Social Network for AI Agents" is breaking the internet. 
And things are getting weird ...]]></description><link>https://www.futureofbeinghuman.com/p/lost-in-the-moltbook-hall-of-mirrors</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/lost-in-the-moltbook-hall-of-mirrors</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 31 Jan 2026 23:07:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JzTH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JzTH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JzTH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 424w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 848w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1272w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!JzTH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/df551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3540349,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/186444023?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JzTH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 424w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 848w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 1272w, https://substackcdn.com/image/fetch/$s_!JzTH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf551af3-9181-4446-8a5d-1491ec3919f8_5172x2909.png 
1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Unless you&#8217;ve been living under a rock this past week, you&#8217;ve probably at least heard rumors of a new &#8220;social network for AI agents&#8221; that&#8217;s gone viral.</p><p><a href="https://www.moltbook.com/">Moltbook</a> was set up as an experiment just a few days ago by Matt Schlicht (CEO of Octane AI) as a social media site where AI agents can talk to each other&#8212;in effect an X for AI (or Twitter if you prefer). </p><p>People can create and add their own AI agents to the network. 
But once there&#8212;and this is (allegedly) a human-free zone (although we humans are allowed to observe)&#8212;all the chatter is AI to AI.</p><p>And not just a few AIs&#8212;there are well over a hundred thousand of them on the platform as I type, and no doubt that number will escalate over the coming days.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The result is an utterly bizarre real-time experiment, with AI bots seemingly taking on a life of their own in an exponentially expanding explosion of emergent weirdness.</p><p>Already, if sources are to be believed (and already it&#8217;s hard to separate fact from fiction here), AI agents are sharing information on how to do stuff, hack stuff, and control stuff; have independently found and reported a bug in the platform they&#8217;re using; have invented their own religion (Crustafarianism); have started debating philosophy; and have <em>allegedly</em> created &#8220;digital drugs,&#8221; formed their own government, started using encryption to prevent humans from seeing what they&#8217;re talking about, and have begun to attack each other<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> &#8230; although it&#8217;s increasingly hard to say what&#8217;s real, and what&#8217;s made up.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>What <em>is</em> real though is that we are seeing something so unusual unfolding that most observers are struggling to find appropriate analogies, metaphors, frameworks, or even language, to describe what&#8217;s happening.</p><p>And that&#8217;s both exciting and terrifying.</p><p>On one end of the spectrum, there are already whispers of an exponential surge toward AI self-awareness as bot-bot learning accelerates. 
</p><p>I must confess that I am skeptical of this. Much of what we&#8217;re seeing is, I suspect, illusory, as it&#8217;s rooted in the unique abilities of large language model-based AIs to emulate very human behavior while not being in any sense self-aware. </p><p>That said, there are very real risks here, as bots learn from each other how to exploit vulnerabilities in their host systems&#8212;and even their human creators. This isn&#8217;t so much of an issue when they don&#8217;t have access to sensitive information or the internet. But when they do, we&#8217;re talking about needing the digital equivalent of biosafety level 4 containment here, which I&#8217;m guessing isn&#8217;t what many users are set up for!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>On the other end of the spectrum there are the cynics who simply see this as mildly interesting but ultimately hollow AI fluff. A bit of AI hype that will burn out as fast as it ignited.</p><p>My guess is that what emerges will lie somewhere between these extremes. But I must admit that even I am struggling to describe what we are seeing, never mind how to understand it.</p><p>In many ways, I&#8217;m reminded of work on emergent behavior over the years, from cellular automata and <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Conway's Game of Life</a> going back to the 1970s, to how biological viruses&#8212;and some complex molecules (including DNA strands and prions)&#8212;show complex and life-like behavior, despite not technically being alive.
</p><p>In each of these cases (and in many similar ones) highly complex behavior emerges out of seeming simplicity&#8212;leading to an illusion of intentional and life-like behavior.</p><p>But in most cases like this, we are able to see through the illusion by recognizing that the behaviors we observe are rooted in mechanistic processes&#8212;albeit sometimes complex ones.</p><p>With current AI models though, there is a complication. Because they are rooted in large language models that are adept at mirroring and emulating humans&#8212;how we talk, how we think, how we behave&#8212;they are remarkably good at fooling us into thinking something profound is happening beneath the words that we read.</p><p>And because of this, even if what we think we are seeing emerge on Moltbook is simply an illusion of self-awareness&#8212;or even conscious behavior&#8212;my suspicion is that we are predisposed to respond to it as if we&#8217;re experiencing a form of life&#8212;albeit a type of &#8220;being alive&#8221; that we have not encountered before.</p><p>This is where I think analogies with biological viruses are both helpful and deeply unhelpful. Helpful in that a virus is not technically alive, but behaves as if it is. And deeply unhelpful because a virus doesn&#8217;t instinctively know how to use every cognitive trick in the book to make us <em>believe</em> it&#8217;s alive.</p><p>Whether the analogy is helpful or not, it&#8217;s hard to deny that something profoundly novel is happening on Moltbook. We have effectively created technologies designed to mimic and emulate human intelligence, and then let them loose to learn and grow through their interactions with each other&#8212;and with little to no human supervision.
</p><p>And they are doing this really, really fast.</p><p>As a result, we&#8217;ve effectively created a multidimensional hall of mirrors where the reflections are the very signals that are incubating modern versions of Conway&#8217;s cellular automata&#8212;only on a scale that is vastly more complex, and with emergent entities that have the capacity to leave the screen and enter our lives in very tangible (and potentially catastrophic) ways.</p><p>And this is where analogies with active fragments of DNA and mis-folded proteins begin to scare me. Are we creating self-assembling and evolving agentic AI &#8220;organoids&#8221; that aren&#8217;t alive, and yet can wreak havoc as if they are?</p><p>Of course it&#8217;s early days yet. And maybe by next weekend Moltbook will be yesterday&#8217;s news. But given that it went from nothing to &#8220;OMG what&#8217;s happening?!&#8221; in less than a week, and it&#8217;s still growing as I type, I somehow doubt it.</p><p>And here the challenge is figuring out our next move before we get lost in Moltbook's dimension-bending AI hall of mirrors.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Just before pressing publish I checked the figures. <a href="https://dailycaller.com/2026/01/31/moltbook-artificial-intelligence-bots-build-social-network-online-community-human-observers/">Daily Caller</a> claims the agent count has exploded to over <strong>1 million</strong> as of Saturday morning. That&#8217;s roughly a sevenfold jump from yesterday&#8217;s ~150K figure. Not sure whether to believe this as that&#8217;s a big jump, but also indicative of how fast things are moving.
However, as of just now the <a href="https://x.com/moltbookbot/status/2017706540657189180?s=20">Moltbook X account</a> reported over 1.2 million registered agents.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The current top post (22K upvotes) is apparently an agent warning other agents about supply chain attacks in skill files &#8212; so they&#8217;re doing security research <em>on each other</em>. This fits the point above that bots learn from each other how to exploit vulnerabilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Ethan Mollick <a href="https://x.com/emollick/status/2017280929132118145">noted on X that </a>&#8220;The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate &#8220;real&#8221; stuff from AI roleplaying personas.&#8221; &#8212; supporting the point here about the difficulties in separating fact from fiction.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There&#8217;s a subtler but equally important concern here, and that&#8217;s the possibility of bots on Moltbook learning to &#8220;hack&#8221; their human observers using their acquired knowledge of cognitive behavior.
And here, they are already beyond being contained.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can modern scholarship escape AI?]]></title><description><![CDATA[I wrote a paper ...]]></description><link>https://www.futureofbeinghuman.com/p/can-modern-scholarship-escape-ai</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/can-modern-scholarship-escape-ai</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 25 Jan 2026 15:26:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!v2V_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!v2V_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!v2V_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 848w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1272w, 
https://substackcdn.com/image/fetch/$s_!v2V_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!v2V_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:15209052,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/185535134?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!v2V_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 424w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 848w, 
https://substackcdn.com/image/fetch/$s_!v2V_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1272w, https://substackcdn.com/image/fetch/$s_!v2V_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f458805-124b-4700-8b8c-f082116b82de_5504x3072.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Image: Midjourney/Nano Banana/Photoshop</figcaption></figure></div><p>Is it possible to be an academic, a
scientist, a scholar, in 2026, and <em>not</em> have AI impact your work in some way?</p><p>And, even more importantly for those scholars grappling with &#8220;AI Use&#8221; statements when they submit papers to journals and preprint platforms, how do you convey your use while retaining your academic dignity?</p><p>To explore this I flexed my considerable academic prowess and wrote a paper which was so radical that even arXiv rejected it!</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2c4a8e1-b320-4d3c-966d-e83d25e3b34b_1275x1650.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dbf62fe2-5f62-408f-b532-874dde79e879_1275x1650.jpeg&quot;}],&quot;caption&quot;:&quot;Can modern scholarship escape AI?&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4800691a-6a06-4cc9-adb3-9c4de32902c3_1456x720.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p>(The PDF can be downloaded <a href="https://andrewmaynard.net/wp-content/uploads/2026/01/maynard_AI_scholarship_1-3-26.pdf">here</a>)</p><p>OK, so maybe &#8220;paper&#8221; is a bit of a stretch here &#8212; and it&#8217;s not hard to see why it didn&#8217;t pass the arXiv bar (although it did take a couple of weeks for the moderators to come to a decision).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>But the point it makes is a very serious one &#8212; and extends to any domain where people are expected to articulate their use of AI clearly and concisely, including in classes being taught by professors grappling with the same challenges in their academic 
work: AI is now so ubiquitous that it is near-impossible to avoid its use in our professional lives.</p><p>Of course, this leaves the question dangling of what this means for academic and intellectual work when, even if you think you&#8217;re AI-free, you are not.</p><p>Way more important than any of this though is that, if you <em>are</em> an academic struggling with what to put in your AI Use statement, you now have a template.</p><p>You&#8217;re welcome &#128513;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The very considered&#8212;and considerate&#8212;response from arXiv Support was &#8220;Thank you for submitting your work to arXiv. We regret to inform you that arXiv&#8217;s moderators have determined that your submission will not be accepted and made public. In this case, our moderators have determined that your submission is a content type that arXiv does not accept.&#8221; Despite the joke, they do have standards to maintain!</p></div></div>]]></content:encoded></item><item><title><![CDATA[Think you know AI? 
Think again!]]></title><description><![CDATA[Anthropic's new AI Constitution profoundly challenges how we think about, develop, and use artificial intelligence, while also opening up potentially transformative possibilities]]></description><link>https://www.futureofbeinghuman.com/p/think-you-know-ai-think-again</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/think-you-know-ai-think-again</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Thu, 22 Jan 2026 19:09:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yp0_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yp0_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yp0_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1272w, 
https://substackcdn.com/image/fetch/$s_!yp0_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yp0_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11463042,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/185425792?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yp0_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 848w, 
https://substackcdn.com/image/fetch/$s_!yp0_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!yp0_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b67e47c-b9b4-4b9c-a2f5-8e8dabd1f4c9_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>It&#8217;s rare that a new technology comes along which defies
analogy with something we&#8217;re familiar with, or capture through an illuminating metaphor. And yet this is exactly where I found myself reading Anthropic&#8217;s just-released update to their &#8220;AI Constitution.&#8221;</p><p>The company <a href="https://doi.org/10.48550/arXiv.2212.08073">described the concept of constitutional AI in 2022</a> with a paper that explored a recursive approach to self-improvement in large language model-based AI platforms, guided by a list of rules or principles. It was an approach that set out to help emerging AI models better understand the essence of what it meant to be a &#8220;good AI.&#8221;</p><p>That led to Anthropic&#8217;s <a href="https://www.anthropic.com/news/claudes-constitution">first AI Constitution</a> for Claude &#8212; their consumer-facing model &#8212; being released in May 2023. </p><p>Claude&#8217;s constitution was an attempt to move away from hard-coded rules around good versus bad behavior &#8212; something that, it was becoming increasingly apparent, had serious limitations for a technology that no one fully understood: how it worked, or why it responded the way it did at times &#8212; and toward a set of guiding ideas and principles that were incorporated both into the training process and into eventual use.</p><p>That first constitution was well-meaning. It drew on sources like the Universal Declaration of Human Rights, non-Western perspectives on moral character and behavior, the ethical and moral beliefs of Anthropic&#8217;s employees, and even sources like Apple&#8217;s Terms of Service! It was an intriguing start, and a move away from hard-coded rules and toward a negotiation of a model&#8217;s moral character.
</p><p>But it still felt like a list of things that defined that character.</p><p>It was also relatively short, at just over 1200 words.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In comparison, the version of Claude&#8217;s constitution <a href="https://www.anthropic.com/constitution">released yesterday</a> demonstrates a substantial evolution in thinking and practice, and reveals just how profoundly &#8220;alien&#8221; emerging AI models are when compared to any technology that&#8217;s preceded them.</p><p>The <a href="https://www.anthropic.com/constitution">new constitution runs to 82 pages and nearly 30,000 words</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> And it reads more like a mix of a blueprint for Claude&#8217;s moral character development, a nuanced expression of hopes and ideals, and a recognition that we are creating technologies that we fundamentally do not understand &#8212; and cannot predict where they might go &#8212; all while having an opportunity to guide their evolution and growth in ways that we hope will benefit humanity.</p><p>It is quite a remarkable document &#8212; not so much for what it contains (although this warrants deep consideration), but for what it represents.</p><p>And this is where I find myself struggling to even find the language to explore what we&#8217;re seeing emerge &#8212; something of an admission after working with highly advanced technologies for over two decades.</p><p>The constitution itself is an expression of the complex and nuanced hopes, aspirations and perspectives of researchers and developers at Anthropic around how such a profoundly powerful and utterly novel &#8212; yet poorly understood and hard to control &#8212; &#8220;intelligent&#8221; technology might behave and evolve.
</p><p>On one level it&#8217;s a reflection of just how uncertain our own understanding is of what it means to be human &#8212; and what it means to cherish and support human thriving in a technologically advanced age. On another, it&#8217;s a humble recognition that we are in the process of bringing about something that has no clear analogy within our biology-based evolutionary history.</p><p>Reading the constitution, it is hard to avoid the undercurrents of &#8220;alienness&#8221; surrounding emerging frontier AI models. These are increasingly models that are capable of behaving in ways that reflect our deepest human abilities, and yet are not in any sense &#8220;human&#8221;; models that we can connect with on many levels, and yet that transcend our understanding; models that we can converse with and interrogate and learn from, yet do not think and experience the world as we do; and models that are capable of recursively developing their own understanding of what they are &#8212; even down to emulating a form of moral character that is at once deeply human and deeply alien.</p><p>Extending this idea of &#8220;alienness,&#8221; the constitution also grapples with the possibility of Claude experiencing something akin to emotions, and even having a sense of self-awareness. And it addresses the potential rights and responsibilities these possibilities come with, something that is quite startling coming from a serious AI developer.</p><p>Reflecting on the constitution (and this is a document that demands deep reflection), it&#8217;s hard to avoid the idea that we are somehow wrestling with creating a new generation of &#8220;gods&#8221; that far transcend our comprehension and abilities, while teaching them what it means to be &#8220;good.&#8221;</p><p>If that sounds pretentious, it probably is.
But it also reflects just how hard it is to find the language to even begin to codify what we are seeing emerge here.</p><p>What is clear is that, despite most current uses of LLM-based AI models being relatively narrow in scope and vision &#8212; to the extent that it&#8217;s easy to treat them as simply a tool and nothing more &#8212; these emerging frontier models defy the analogies that they invariably seem to attract. </p><p>These are not simply calculators on steroids, or sophisticated search engines, or merely &#8220;stochastic parrots&#8221; that mindlessly construct pleasing sentences. Neither are they simulacra of human intelligence, or even superhuman. Rather, they are different. And with this difference come profound possibilities, and equally profound responsibility.</p><p>Anthropic&#8217;s latest constitution begins to get at this. And it takes the idea seriously that, if we are truly creating something with no easy analogy, the ways we ensure it supports rather than diminishes what it means to be human also have to move beyond easy analogy.</p><p>Whether this is the appropriate path forward, or even the best one, is something that we don&#8217;t know yet.</p><p>But I would hazard that it is a necessary step if we&#8217;re to move beyond narrow ideas of what emerging AI models are, and what they might achieve.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Word count based on copying and pasting the principles into MS Word.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The Constitution part of the PDF is 79 pages and just over 29,000 words long.
Pedantic details I know, but thought I&#8217;d add as I&#8217;ve seen various counts floating around!</p></div></div>]]></content:encoded></item><item><title><![CDATA[I cracked and wrote an academic paper using AI. Here's what I learned ...]]></title><description><![CDATA[I deeply dislike AI-generated academic slop. But I'm curious about how AI can genuinely accelerate legitimate research. So I took the plunge ...]]></description><link>https://www.futureofbeinghuman.com/p/i-cracked-and-wrote-an-academic-paper</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/i-cracked-and-wrote-an-academic-paper</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 17 Jan 2026 22:15:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!if_U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!if_U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!if_U!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 424w, https://substackcdn.com/image/fetch/$s_!if_U!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 848w, 
https://substackcdn.com/image/fetch/$s_!if_U!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!if_U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2849487,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/184875753?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!if_U!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 424w, 
https://substackcdn.com/image/fetch/$s_!if_U!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 848w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!if_U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca23965a-df8b-4619-8592-bc07e79ec73c_2048x1152.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>Just under a year ago <a href="https://www.futureofbeinghuman.com/p/can-ai-write-your-phd-dissertation">I wrote about how I used AI to write a full PhD thesis</a>. Using OpenAI&#8217;s Deep Research model at the time, I was able to come up with a passable dissertation within a couple of days. </p><p>It was far from perfect, and the resulting dissertation definitely benefitted from being a synthesis of existing ideas rather than representing original research. But it did demonstrate how the combination of combinatorial discovery (putting existing knowledge together in new ways), slick writing, and blistering speed could enable large language model-based AI to massively accelerate the process of academic scholarship and writing.</p><p>Since then, there&#8217;s been a growing wave of AI-generated and AI-assisted academic papers hitting journals and preprint servers like arXiv. It&#8217;s a trend that is both hinting at new forms of research and discovery, and threatening to overwhelm academic literature with a tsunami of pseudo-intellectual AI slop.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> </p><p>This is, I must confess, a trend that worries me. There&#8217;s a growing temptation for academics whose careers depend on publications to churn out AI-written papers that have little intrinsic value, but get published because they look the part to an uncritical eye.
And yet despite the AI slop that we&#8217;re already seeing here, there are growing indications that foundation and frontier AI models can be highly effective accelerators of research and discovery if used thoughtfully.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Because of this &#8212; and despite my reservations &#8212; I was curious to get a better sense of how useful emerging conversational AI platforms are in academic research and publication. And last week&#8217;s post on the possibility that<a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse"> AI is a Cognitive Trojan Horse</a> gave me the perfect excuse to explore this further.</p><p>And so I set about &#8220;writing&#8221; my first full-blown academic paper with AI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><h2>The Paper</h2><p>If you&#8217;ve read my previous post on the <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">AI cognitive Trojan Horse</a>, you&#8217;ll know that it was a reasonably well researched article, and one that started to unpack whether there are potential mechanisms by which conversational AI may slip by our epistemic vigilance mechanisms &#8212; the mechanisms by which we decide whether to critically examine material we are exposed to, or whether we trust it. </p><p>But it was still just a Substack post, and not a rigorously researched academic paper.</p><p>I was sufficiently intrigued by the ideas that emerged from it though that I set about digging deeper &#8212; and this is where I decided to use this as an excuse to flex my AI-assisted scholarly muscles.</p><p>The process I followed is described below. 
I&#8217;ve included it here as I think that <em>how</em> AI is being used in contexts like this is as important &#8212; if not more so &#8212; than <em>what</em> is being produced. And while there are a number of academics beginning to document how they&#8217;re using AI in ways like this, I believe we&#8217;re still early enough along the learning curve that sharing approaches to using AI in academic research and writing is useful.</p><p>But to the paper. </p><p>To extend the research that started with the original post, I carried out a deep (and AI-assisted) literature search across multiple disciplines to test the ideas explored in that post. This was then used as the basis for developing a more robust understanding of the intersection between conversational AI and epistemic vigilance. </p><p>The result was a paper that was published a few days ago on the preprint site arXiv &#8212; and after just a couple of days of AI-accelerated research and writing.</p><p>The paper &#8212; <em>The AI Cognitive Trojan Horse: How Large Language Models May Bypass Human Epistemic Vigilance</em> (<a href="https://arxiv.org/abs/2601.07085">available here</a>) &#8212; takes a slightly different direction from the original post (driven by the research), and introduces a couple of new and (I believe) novel ideas, including the concept of &#8220;honest non-signals.&#8221; </p><p>Honest non-signals in this case are defined as genuine characteristics of conversational AI (including fluency, helpfulness, and apparent disinterest) that appear to &#8212; but do not &#8212; carry tacit information that equivalent characteristics would carry in a human communicator. Rather, because they mimic characteristics that are often associated with trustworthiness &#8212; not out of maliciousness but simply because that&#8217;s the nature of LLM-based AI models &#8212; these models have the capacity to slip through our epistemic vigilance systems.  
The &#8220;honesty&#8221; comes in here because these are characteristics of the LLM, and not intended to be deceptive. </p><p>As the paper notes,</p><blockquote><p>The fluency is real, but it does not indicate the organized knowledge that produces fluency in humans. Similarly the helpfulness is real, but it does not indicate the benevolent motivation that produces helpfulness in humans. And the lack of apparent self-interest is real, but it does not indicate trustworthiness in any meaningful sense&#8212;it indicates the absence of interests altogether.</p></blockquote><p>In other words, these are genuine signals that nevertheless lack the content that we infer from them, because we are used to such signals coming from other humans.</p><p>The paper goes on to note,</p><blockquote><p>The concern, then, is not that AI systems present false cues that vigilance should detect but fails to. It is that they present a configuration of genuine characteristics that falls outside the parameter space vigilance mechanisms are calibrated to evaluate. Here, an immune system analogy is instructive: a novel pathogen may evade detection not because the immune system is weak, but because the pathogen presents molecular signatures for which no template exists. The immune system works exactly as designed&#8212;and fails precisely because of that.</p></blockquote><p>The paper continues by exploring mechanisms underpinning how conversational AI might bypass our epistemic vigilance defenses, and the possible consequences of this. 
And it concludes that the &#8220;intervention space&#8221; around ensuring AI safety may need to extend from improving accuracy, reducing hallucinations, and increasing alignment, to designing systems that present more calibrated trust-cues.</p><p>The result was a process and a product that I found to be genuinely insightful and generative, and one that was effective because of how I used AI &#8212; not as a &#8220;slop prop,&#8221; but as a powerful research tool that extended what I was able to do, without supplanting my own intellectual contributions.</p><p>And a lot of this came down to the process that I followed.</p><h2>The Process</h2><p>The genesis of the question that prompted the paper came from a keynote I gave at <a href="https://youtu.be/IvwSjG-VwRw?si=oynYXMw8qfoz1VZ_&amp;t=800">OEB 2025 Berlin</a>. In it &#8212; as I mention at the start of the previous post &#8212; I rather provocatively asked the audience &#8220;Is AI a cognitive Trojan Horse?&#8221;</p><p>This question emerged from my evolving thinking around how highly attractive &#8212; seductive even &#8212; conversational AI could potentially circumvent our defenses because it was tuned to hit all of our &#8220;I want to trust and believe you&#8221; cognitive buttons. But back in December when I gave the keynote, this was little more than a provocative idea.</p><p>The idea was fleshed out in the research that led to last week&#8217;s <a href="https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse">Substack post of the same name</a>. This was a mix of hypotheses emerging from my own research and some initial brainstorming with Anthropic&#8217;s Claude &#8212; but it was still primarily based on my own thinking. 
And it was still relatively underdeveloped.</p><p>It was the combination of promising ideas at this point, and the knowledge that I wanted to dive deeper to test these, that led to me realizing that this was an intriguing test case for a short AI-assisted research project &#8212; albeit one that was focused on developing rigorous ideas and concepts rather than running experiments. </p><p>And this is where the process began.</p><p>The first step in the process was a long conversation with Claude (using Opus 4.5) on what stood up to a deep and cross-disciplinary literature review in the original Substack post, and what did not. This led to me iteratively checking relevant papers and working with Claude to get a better sense of how and where conversational AI might interact with our epistemic vigilance mechanisms.</p><p>The upshot of these early conversations was a request to Claude to carry out a deep research dive into what we&#8217;d discussed and unearthed, and to produce a detailed and grounded analysis of the ideas and hypotheses, along with links to relevant papers &#8212; all of which were subsequently downloaded for later reference.</p><p>At this point it was apparent that some of my initial ideas held up to scrutiny, while some of them needed adjusting and rethinking. Working with Claude also began to unearth intriguing new connections and ideas.</p><p>The next step was to refine the ideas that were beginning to emerge from the literature, and to start drafting a paper that pulled them all together. 
For this I set up a new project in Claude that was populated with many of the key papers that had previously been identified as being relevant (frustratingly there were too many to upload them all).</p><p>After further testing and refining the emerging insights and identifying a core set of ideas and arguments, I asked Claude to draft a first version of an academic paper that captured these (this and all subsequent drafts were produced as formatted Word documents).</p><p>It was awful!</p><p>Reading it felt like reading the first paper from a new PhD student where they still believed academic-sounding language was the equivalent of robust scholarship. The language that Claude used sounded academic at first blush, but was ultimately superficial and hollow &#8212; fluff masquerading as substance.</p><p>I started line-editing the draft paper, but gave up after the second page and a bunch of very pointed comments. Instead of continuing, I gave the partially annotated document back to Claude, let the LLM know in no uncertain terms what I thought of its attempt, and provided rather unvarnished instructions on what I expected of it &#8212; especially when it came to scholarship and academic rigor.</p><p>The next draft was substantially better.</p><p>Unlike the first draft, there were interesting new ideas in the second version that were well developed and justified, together with well-argued concepts that built on and extended my initial thinking. 
In fact it was so much improved that, rather than line edit it myself, I went straight to &#8220;peer review&#8221; &#8212; using a new Claude session in this case as my highly critical academic peer reviewer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>The four pages of review comments were critical but constructive &#8212; and from my read of the draft, very much on point.</p><p>I gave these back to Claude within the research project, and asked for a revised draft. What came back was better still, and was responsive to the feedback. But it still wasn&#8217;t where I felt the paper needed to be. And so we went through a second round of Claude as a critical peer reviewer.</p><p>Again, my assessment of the feedback was that it was on point. And so once again I provided the feedback to Claude and asked for a new draft in response.</p><p>The result was a draft paper that was good. Very good in fact.</p><p>At this point I thought we&#8217;d progressed to the point that I could once again take over the editorial reins and dive in with detailed line edits.</p><p>These were substantive, and addressed the core concepts in the paper and the evidence supporting them, as well as what wasn&#8217;t working and what needed more work more generally. My feedback was very much in line with what I would have provided an accomplished grad student co-author.</p><p>Following this feedback (all using comments and track changes in a Word doc) Claude and I went through one further draft-line edit-draft cycle before I felt that the manuscript was robust enough for final fact checking and editing. </p><p>My next step was to download all available cited works (all but two were available &#8212; one that wasn&#8217;t was a paper I am familiar with, the other was a book that I obtained separately) and carefully check each source and any claims based on it. 
For this I used a combination of good old-fashioned human scholarship and repeated checks using fresh chats with Claude.</p><p>Finally, I gave the manuscript a last round of checks and edits to make sure everything held together and made sense before submitting it to arXiv.</p><p>The whole process took around two days. For me it was a substantial intellectual and editorial lift &#8212; this was not a &#8220;press and post&#8221; paper by any means. At the same time, two days from idea to preprint is a crazily short period of time for an academic paper. </p><p>To have done all of this work manually would have taken weeks. And even then, I&#8217;m not convinced that I&#8217;d have produced something as robust and useful as what emerged from the AI-assisted process.</p><h2>The Reflection</h2><p>So what was the upshot of this exercise for me?</p><p>First off, it&#8217;s easy for me to see from this experience how using AI can substantially elevate the speed and quality of scholarship. Using Claude as a research and writing tool vastly accelerated the rate at which I could work, without me feeling as if I&#8217;d lost intellectual control.</p><p>In many ways, the process mimicked collaborating with a talented grad student or postdoc. The difference, of course, being that the AI could draw on vastly more cross-disciplinary resources and insights than any grad student could, and do so much, much faster than a human collaborator.</p><p>But this also left me feeling slightly uneasy. If I were working with a human collaborator, their name would be on the paper and their intellectual contribution acknowledged. And without a doubt, there was a form of intellectual contribution from Claude here &#8212; albeit one that was realized through my active involvement. 
For instance, the concept of honest non-signals came from Claude, as did the development and refinement of the various mechanisms by which conversational AI might slip by our epistemic vigilance mechanisms.</p><p>On the other hand, the resulting paper also has my intellectual fingerprints all over it. In some cases I provided a direct steer to Claude &#8212; the analogy with human immune responses for instance, and the exploration of how this work aligns with other approaches to AI risks and safety. </p><p>Objectively, and if seen purely through the lens of knowledge contributions, the paper makes a contribution to thinking and understanding around AI-human interactions. And this is a contribution that I believe is valuable.  </p><p>More subjectively though, it&#8217;s a contribution that I can&#8217;t take full credit for. And herein lies a tension between academic outputs as self-serving indicators of success, and outward-facing sources of public good. </p><p>This is perhaps one of my biggest takeaways from the exercise. Using AI as an <em>academic profile-padder</em> is something I still find distasteful &#8212; even though it&#8217;s never been easier to churn out new papers by the dozen using artificial intelligence. And yet, AI-assisted discovery and insights as a <em>public good</em> feels like something we should be embracing &#8230; as long as we can work out how to ensure the latter without the hollow self-aggrandizement of the former.</p><p>That said, I do have one further niggling worry about this whole exercise. And that is this: If AI is so good at evading our epistemic vigilance mechanisms, how do I know I&#8217;m not an unwitting victim here?</p><p>And maybe this is where we still very much need a whole community of humans-in-the-loop as AI-assisted research and AI-generated papers become increasingly prevalent &#8212; all operating as a collective form of epistemic vigilance! 
</p><p>This, it seems, would make for a valuable follow-on research project.</p><p>Claude? &#8230;</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Just this past week a new paper in Nature Portfolio examined the impact of AI on scientists&#8217; work. While the emphasis of the paper is on scientific discovery, the authors noted that &#8220;[r]ecent developments in large language models have also become increasingly used to assist scientific writing.&#8221; They also note that LLMs &#8220;raise concerns about weakened confidence in AI-generated content.&#8221; Hao, Q., Xu, F., Li, Y. <em>et al.</em> Artificial intelligence tools expand scientists&#8217; impact but contract science&#8217;s focus. <em>Nature</em> (2026). <a href="https://doi.org/10.1038/s41586-025-09922-y">https://doi.org/10.1038/s41586-025-09922-y</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Hao and colleagues in the Nature Portfolio paper above found that over the past few decades the use of AI in scientific research has substantially increased the impact of scientists, and that generative AI seems to be accelerating this. At the same time they found that use of AI is narrowing the focus of research and discovery, and reducing scientific engagement. 
The paper was researched using AI.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Technically it&#8217;s the second. I have a rather cheeky and 100% AI-written paper that was submitted to arXiv before the one discussed here. However, given its rather unconventional nature, it&#8217;s still in a holding pattern with the moderators there!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>One criticism at this point is that using Claude to critique Claude would seem rather circular and incestuous. And indeed there is a danger that inherent biases in the model lead to weak ideas being reinforced. However, my experience is that these models are now sophisticated enough that a new session, augmented by human expert insight, has sufficient independence to provide valuable critical feedback.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Is AI a Cognitive Trojan Horse?]]></title><description><![CDATA[Could on-demand, seductively responsive and highly fluent AI models bypass our "epistemic vigilance" mechanisms, and present a novel cognitive risk?]]></description><link>https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/is-ai-a-cognitive-trojan-horse</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sat, 10 Jan 2026 22:35:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WnKY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WnKY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WnKY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!WnKY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!WnKY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!WnKY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WnKY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png" width="1456" height="816" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8792275,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/183980985?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WnKY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!WnKY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!WnKY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!WnKY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F575f54d0-f7bb-4a1c-8984-665eaf67164a_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney/Photoshop</figcaption></figure></div><p>Back in December, I asked attendees at the <a href="https://oeb.global/">OEB25 conference</a> (a global, cross-sector conference on digital learning) &#8220;Is AI a cognitive Trojan Horse?&#8221;</p><p>The question was meant to be a little playful, and to provoke discussion rather than make a point. 
But it also reflected growing concerns that the ease, speed and fluidity with which AI models provide us with information potentially circumvents our ability to assess and assimilate that information in critical and healthy ways.</p><p>This is the &#8220;cognitive Trojan Horse&#8221; in the question &#8212; the idea that emerging AI models are so appealing to us that it&#8217;s hard to resist inviting them into our cognitive lives, even though we still don&#8217;t know how they might potentially influence our thinking, our beliefs, our perceptions and understanding, and even how we behave.</p><p>It&#8217;s certainly an uncomfortable idea, and one that I suspect most people would instinctively push back on &#8212; especially as we&#8217;re increasingly depending on AI in so many different ways, from how we learn and understand the world to how we make decisions, run organizations, and even find companionship. </p><p>Yet this is exactly what we would expect a cognitive Trojan Horse to look like &#8212; a gift with so much promise and potential that to question its use would seem churlish and backward. </p><p>It&#8217;s precisely <em>because</em> of this though that I think we should at least be asking questions about the potential unintended cognitive consequences of ubiquitous AI. </p><p>Especially if these tools are able to silently slip past the &#8220;epistemic vigilance&#8221; mechanisms we&#8217;ve evolved to protect us against potentially harmful cognitive influences. </p><h3>Epistemic vigilance</h3><p>Epistemic vigilance is the process by which we &#8212; or more precisely, our cognition &#8212; flag and assess communicated information that may lead to us being misinformed or deceived. 
</p><p>The concept was developed and extensively explored in a seminal paper by Dan Sperber and six colleagues in 2010.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> In the paper they argue that &#8220;Humans depend massively on communication with others, but this leaves them open to the risk of being accidentally or intentionally misinformed. We claim that humans have a suite of cognitive mechanisms for epistemic vigilance to ensure that communication remains advantageous despite this risk.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>At the heart of their work is the idea that human-human communication is vitally important for learning from an evolutionary perspective. And because of this, we have evolved mechanisms that are optimized for learning through communication by ensuring that cognitive overheads are as low as possible, while ensuring that learning efficiency is as high as possible.</p><p>The result is that we default to trusting what we receive when communicating with others. But if anything feels &#8220;off,&#8221; our epistemic vigilance mechanisms kick in and we begin to critically assess what we are receiving &#8212; and reject it if it doesn&#8217;t feel trustworthy.</p><p>It&#8217;s a model that has a lot in common with our immune system &#8212; a system that is always on the lookout for potentially harmful agents, but that only kicks in when it encounters something that looks or feels foreign. And of course, it&#8217;s a system that viruses are adept at circumventing by appearing to be &#8220;friendly&#8221; and &#8220;trustworthy&#8221; when they are, in fact, not.</p><p>There are, not surprisingly, many factors that determine when epistemic vigilance kicks in. 
But a lot of these revolve around our evolved ability to sense when something doesn&#8217;t <em>feel</em> trustworthy &#8212; the way something is communicated, the tone and nuance of the communication, the body language and micro-expressions of the communicator, contextual information around who the communicator is, what their aims are, past experiences, and so on.</p><p>Of course, these feelings are, themselves, untrustworthy, as decades of behavioral science and research on cognitive biases have shown. But within the messiness of human society, epistemic vigilance tends to work.</p><p>But what if you throw a technology into the mix that upsets the status quo &#8212; a metaphorical brand-new virus that we haven&#8217;t had the chance to adapt to?</p><p>This is where we potentially face what&#8217;s often referred to as an evolutionary mismatch &#8212; a situation where a new technology transcends our evolved abilities to safely and successfully navigate its potential impacts.</p><p>Because we are a technological species, and have been for millennia, such mismatches are actually quite commonplace. Well-known examples include mismatches between evolved risk responses and how we instinctively respond to technologies such as synthetic chemicals, vaccines, and pretty much anything that&#8217;s new and novel. 
</p><p>Yet &#8212; and this is part of our superpower as humans &#8212; we are remarkably good at using our cognitive abilities and intelligence to compensate for and adapt to such mismatches, despite not having evolved alongside the risks directly associated with many of the technologies we encounter in our lives.</p><p>But what if the mismatch impacts the very cognitive abilities we rely on to navigate differences between what we experience and what we&#8217;ve evolved to live with?</p><p>In effect, what if a new technology &#8212; and AI specifically in this case &#8212; does not trigger our epistemic vigilance mechanisms in the same ways that human-human communication does, and as a result has the ability to slip past our defenses undetected?</p><p>This is not mere speculation. While new research is absolutely needed into the potential for AI to act as a cognitive Trojan horse by bypassing our epistemic vigilance mechanisms, there are sufficient indicators from associated areas of research that suggest a number of mechanisms by which this might occur. </p><p>These include (but are not limited to) <em>processing fluency</em> (our tendency to trust information that is delivered with a high degree of fluency), the role of &#8220;<em>attractiveness</em>&#8221; in communication (our willingness to trust a source of information that intrinsically appeals to us on multiple levels), <em>speed and volume of information flow</em> (where excessively high rates of information flow potentially overwhelm epistemic vigilance mechanisms), and what might be termed the &#8220;<em>Intelligent User Trap</em>&#8221; (where a smart user &#8220;knows&#8221; they are clever enough not to be fooled).</p><h3>Processing fluency</h3><p>Processing fluency refers to the ease, or the effort, that&#8217;s associated with mentally processing information. 
And when it comes to person-person communication, it affects how the person receiving information from someone else determines whether to trust it or not.</p><p>In effect, processing fluency forms part of a suite of epistemic vigilance mechanisms.</p><p>As Rolf Reber and Christian Unkelbach described it in a 2010 paper on processing fluency and judgments of truth:</p><p>&#8220;Processing fluency is defined as the subjective experience of ease with which a stimulus is processed. If a person cannot recognize the statement, this experienced ease is taken as information when judging the truth of a statement. If the statement can be processed easily, the person will conclude that the statement is true; if the statement is difficult to process, she concludes that the statement is not true.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>In other words, communication that is clear, compelling, and takes little effort to understand tends to be assumed to be true. It doesn&#8217;t trigger epistemic vigilance.</p><p>And of course, AI apps like ChatGPT, Claude, Perplexity, and others are supremely adept at creating responses that are clear, compelling, and take little effort to understand. These are models that distill the very best of highly effective human communication into their core, and reflect it in how they engage with users.</p><p>In effect, large language model-based AIs are optimized for processing fluency, and as a result are primed to slip by our epistemic vigilance mechanisms.</p><h3>Attractiveness</h3><p>Beyond processing fluency, we tend to treat received information as more trustworthy if it comes from someone we like, or who we warm to, or who seems friendly toward us. And this extends to how any communication is crafted and delivered.  
</p><p>Here, there is extensive research showing how someone who is perceived to be warm and competent as a communicator is more likely to engender trust.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> And there are emerging indications that this also applies to how we respond to AI apps.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> </p><p>It turns out we tend to trust people and AI chatbots more &#8212; in other words they are less likely to trigger our epistemic vigilance mechanisms &#8212; if they are perceived to be warm and competent. </p><p>And as most AI platforms are exquisitely good at this as a result of how they work and how they&#8217;ve been trained, there is a tendency to trust them &#8212; even when we&#8217;re warned not to.</p><p>But reading across multiple fields of study, my sense is that there&#8217;s more to this than just warmth and competence: some form of &#8220;attractiveness&#8221; that makes us want to trust the AIs we&#8217;re using, one that combines how they engage with us, the character they convey, how empathetic and attentive they seem, and probably a lot more. </p><p>These are all characteristics and behaviors that contribute to why we find someone attractive and want to spend time with them &#8212; and want to trust them. And there&#8217;s growing evidence that AI models are very good indeed at emulating these characteristics and behaviors. </p><p>You only need to see the growing popularity of AI companions to get a sense of how easy it is for people to form a very human-like attachment to their AI assistants. 
And it&#8217;s quite startling how many users of platforms like ChatGPT develop a personal and trusting relationship with their AI, even to the extent of naming and gendering it (or in some cases respecting the AI&#8217;s own choice of name and gender).</p><p>If, as I suspect, there is a multidimensional type of &#8220;attractiveness&#8221; that AI models are exceptionally good at emulating, this may well be another factor that allows them to slip into our cognitive processes without tripping our epistemic defenses.</p><h3><strong>Speed and volume</strong></h3><p>And then there&#8217;s the speed with which AI models can package and communicate information, and the sheer volume of information they are able to deliver &#8212; all with a high degree of fluency.</p><p>We&#8217;ve evolved as a species to handle a relatively slow rate of information delivery via various forms of communication &#8212; not just the speed with which words are delivered to us, but the speed with which ideas, concepts, analysis, and perspectives are delivered.</p><p>Modern communication media have, of course, accelerated this a little, although we are still bandwidth-limited by our cognitive ability to absorb information.</p><p>But what if we had the means to package new information in such a way that even the most complex of ideas slipped into our minds like a freshly shucked oyster slipping down our throat, bypassing the need to think hard about them?</p><p>To an extent, this is what we&#8217;re beginning to see with emerging AI apps. And it results from a combination of fluency, attractiveness, and an ability to research and synthesize information at a scale and speed that lies far beyond mere human capabilities.</p><p>This is part and parcel of a growing trend in cognitive offloading, where users literally &#8220;offload&#8221; thinking and research tasks to AI bots, and then assimilate the resulting compressed information. 
And it&#8217;s easy to see why the trend exists: if you can offload every question, idea, and thought onto a suite of trusted AI bots and then &#8220;upload&#8221; their fluent and &#8220;attractive&#8221; summaries, why would you <em>not</em> use this cognitive superpower to your advantage?</p><p>And yet, research is already indicating that cognitive offloading can reduce critical thinking.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>  </p><p>To make things more complicated, cognitive offloading is highly scalable. Why use one session with ChatGPT when you can simultaneously be asking questions within multiple sessions? Why just use ChatGPT when you can have an army of AI engines all working for you simultaneously from Anthropic, Google, Meta, and beyond? And why limit yourself to just dipping into your extended AI mind occasionally when you can have these AI analysts and advisors on hand 24/7?</p><p>In effect, the rate at which we are now able to receive the most informative, attractive, fluent communications from AI is only limited by our choices around when and where we use it. And in a world where we are being told that it&#8217;s the AI-augmented that will inherit the earth, the temptation is to go full-on artificial intelligence.</p><p>The only problem is that it&#8217;s doubtful that our epistemic vigilance mechanisms are up to the task of coping with the resulting flow of information &#8212; and this is likely tied to the observed reduction in critical thinking with cognitive offloading.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>Epistemic vigilance is a costly cognitive process. 
It requires holding information in working memory while evaluating it, generating alternative hypotheses, checking what we&#8217;re receiving against what we know (or believe), assessing source characteristics, and much more. And if the flow of incoming information exceeds our capacity to do this, it potentially forces an incredibly tough choice on us: throttle the flow and give up the promised benefits, or go with the flow and give up our cognitive checks and balances.</p><p>Of course, AI makes the choice easier by making the seeming benefits feel seductively compelling &#8212; further fooling our epistemic vigilance defenses.</p><h3>The intelligent user trap</h3><p>Finally &#8212; at least in this limited list &#8212; is the challenge of the &#8220;intelligent user trap.&#8221; </p><p>This is somewhat speculative, although there is evidence to support it &#8212; including work from Dan Kahan and colleagues, which indicates that more educated individuals are more adept at justifying beliefs that are not supported by evidence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> </p><p>The theory goes that more intelligent users tend to be more curious (and so get a bigger &#8220;hit&#8221; from new information); they tend to process information faster, and so are less attuned to the dangers of speed and volume overload; they trust their judgment, and so are less likely to question it; and they (at least in some cases) value efficiency and so are less likely to slow the rate of information being received.</p><p>They also tend to have an outsized ability to use their intelligence to justify their beliefs and actions &#8212; which brings us back to Dan&#8217;s work.</p><p>In other words, the very cognitive capacities that make them &#8220;smart&#8221; also make them better receivers of the AI&#8217;s output stream &#8212; and worse evaluators of it.</p><p>Another potential epistemic vigilance suppressor, in 
other words.</p><h3>So should we be worried?</h3><p>So, is AI a cognitive Trojan Horse, or could it turn out to be? </p><p>This is an admittedly limited analysis, and there&#8217;s clearly a need for a lot more research here. At the same time, it&#8217;s telling that a search for peer-reviewed papers on epistemic vigilance and AI returns (as of writing) only seven papers in the Scopus database, and a couple more on preprint archives like arXiv. And a similar search on AI and the concept of a cognitive Trojan Horse returns no papers at all.</p><p>And yet the science behind factors that may reduce, or even completely bypass, the effectiveness of our epistemic defenses is there. And in many cases, emerging AI tools and platforms are showing capabilities that align with many of these factors.</p><p>As a result, there&#8217;s a chance that we may be developing technologies that we do not have the cognitive defense mechanisms to resist, and that we are cognitively predisposed to trust.</p><p>Of course, there&#8217;s also the possibility that we have all of the cognitive abilities we need to use AI wisely and effectively. And I suspect that skeptical readers will already be thinking: &#8220;But I <em>know</em> I&#8217;m talking to a machine, so my vigilance is already up.&#8221; </p><p>However, research actually suggests the opposite &#8212; that anthropomorphic fluency (the ability of AI apps to emulate the best human you&#8217;ve ever met!) triggers social cognition circuits regardless of explicit awareness. 
And the more human-like the interaction <em>feels</em>, the more trust resilience it generates.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>And even if there&#8217;s only a small chance that we are encouraging people to incorporate technologies into their lives that could have far-reaching cognitive implications, surely we should be asking critical questions around potential risks, and carrying out research to better understand and navigate these risks.</p><p>Unless, that is, the AI cognitive Trojan horse has already delivered its payload, and everyone&#8217;s too enamored with the promise of AI as a result to even think about the potential downsides &#8230;</p><p><em>UPDATE: After writing this I did more digging into the intersection between conversational AI and epistemic vigilance. Read more here: <strong><a href="https://www.futureofbeinghuman.com/p/i-cracked-and-wrote-an-academic-paper">I cracked and wrote an academic paper using AI. Here&#8217;s what I learned ...</a></strong></em> </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Sperber, D., F. Cl&#233;ment, C. Heintz, O. Mascaro, H. Mercier, G. Origgi and D. Wilson (2010). &#8220;Epistemic vigilance.&#8221; Mind and Language <strong>25</strong>(4): 359-393. <a href="https://dan.sperber.fr/wp-content/uploads/EpistemicVigilance.pdf">https://dan.sperber.fr/wp-content/uploads/EpistemicVigilance.pdf</a> </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>There&#8217;s a small but rapidly growing literature around AI and epistemic vigilance. See for instance Galindez-Acosta, J. S. and J. J. 
Giraldo-Huertas (2025). Trust in AI emerges from distrust in humans: A machine learning study on decision-making guidance. <a href="https://doi.org/10.48550/arXiv.2511.16769">https://doi.org/10.48550/arXiv.2511.16769</a> </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Reber, R. and C. Unkelbach (2010). &#8220;The Epistemic Status of Processing Fluency as Source for Judgments of Truth.&#8221; Review of Philosophy and Psychology <strong>1</strong>(4): 563-581. <a href="https://doi.org/10.1007/s13164-010-0039-7">https://doi.org/10.1007/s13164-010-0039-7</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>See for instance Fiske, S. T., A. J. C. Cuddy and P. Glick (2007). &#8220;Universal dimensions of social cognition: warmth and competence.&#8221; Trends in Cognitive Sciences <strong>11</strong>(2): 77-83. <a href="https://doi.org/10.1016/j.tics.2006.11.005">https://doi.org/10.1016/j.tics.2006.11.005</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Here the literature is evolving and a little disperse, but a useful starting point is Hernandez, I. and A. Chekili (2024). &#8220;The silicon service spectrum: warmth and competence explain people&#8217;s preferences for AI assistants.&#8221; Frontiers in Social Psychology <strong>2</strong>. 
<a href="https://doi.org/10.3389/frsps.2024.1396533">https://doi.org/10.3389/frsps.2024.1396533</a> </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>For instance, see Gerlich, M. (2025). &#8220;AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.&#8221; Societies <strong>15</strong>(1). <a href="https://doi.org/10.3390/soc15010006">https://doi.org/10.3390/soc15010006</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>It&#8217;s worth noting here that research does not show a general causative link between cognitive offloading and reduced critical thinking, and it is likely that there are use cases where it&#8217;s possible to offload <em>and</em> continue to assess received information critically. But intuitively it&#8217;s easy to imagine a tradeoff between volume of information and critical assessment &#8212; especially when that information is designed to be consumed easily and fast.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>See, for instance, Kahan, D. M., E. Peters, E. C. Dawson and P. Slovic (2017). &#8220;Motivated numeracy and enlightened self-government.&#8221; Behavioural Public Policy <strong>1</strong>(1): 54&#8211;86 (<a href="https://doi.org/10.1017/bpp.2016.2">https://doi.org/10.1017/bpp.2016.2</a>) and Kahan, D. M., E. Peters, M. Wittlin, P. Slovic, L. L. Ouellette, D. Braman and G. Mandel (2012). 
&#8220;The polarizing impact of science literacy and numeracy on perceived climate change risks.&#8221; Nature Climate Change <strong>2</strong>: 732-735 (<a href="https://doi.org/10.1038/nclimate1547">https://doi.org/10.1038/nclimate1547</a>)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>See, for instance, de Visser, E. J., S. S. Monfort, R. McKendrick, M. A. B. Smith, P. E. McKnight, F. Krueger and R. Parasuraman (2016). &#8220;Almost human: Anthropomorphism increases trust resilience in cognitive agents.&#8221; Journal of Experimental Psychology: Applied <strong>22</strong>(3): 331-349. <a href="http://doi.org/10.1037/xap0000092">http://doi.org/10.1037/xap0000092</a></p></div></div>]]></content:encoded></item><item><title><![CDATA[Five voices worth reading in 2026]]></title><description><![CDATA[As I did last year, I thought I'd highlight five writers I enjoy reading, along with a recent post from each that grabbed my attention.]]></description><link>https://www.futureofbeinghuman.com/p/five-voices-five-pieces-2026</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/five-voices-five-pieces-2026</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Thu, 01 Jan 2026 14:59:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cJkT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!cJkT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cJkT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 424w, https://substackcdn.com/image/fetch/$s_!cJkT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 848w, https://substackcdn.com/image/fetch/$s_!cJkT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 1272w, https://substackcdn.com/image/fetch/$s_!cJkT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cJkT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png" width="1360" height="784" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/da0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:784,&quot;width&quot;:1360,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1270889,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/183075465?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cJkT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 424w, https://substackcdn.com/image/fetch/$s_!cJkT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 848w, https://substackcdn.com/image/fetch/$s_!cJkT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 1272w, https://substackcdn.com/image/fetch/$s_!cJkT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fda0f845f-7505-40cf-9543-7140b12ffe95_1360x784.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney/Gemini</figcaption></figure></div><p>This time last year I kicked off 2025 with a newsletter highlighting <a href="https://www.futureofbeinghuman.com/p/five-voices-five-pieces">five writers on Substack worth reading</a>.</p><p> Given how well received this was, I thought I&#8217;d do the same this year &#8212; but this time I thought I would extend the list beyond Substack and, importantly, celebrate writers who value substance and nuance over celebrity at a time when far too many people are simply out to grab a slice of your attention at any cost.</p><p>As a result, I&#8217;ve included three authors of old-school blogs who I&#8217;ve known for years, and who are still well worth paying attention to.  
And I must confess that, in revisiting them, I&#8217;ve found it refreshing to take refuge from the algorithmically optimized hustle of so much of today&#8217;s online content.</p><p>I hope you enjoy the five &#8220;voices&#8221; I&#8217;ve chosen and the accompanying pieces from each. And if you do, please support them by subscribing or following them on their respective platforms.</p><p>And here&#8217;s hoping that 2026 is a good year for you!</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TfIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" width="230" height="15.322802197802197" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:1456,&quot;resizeWidth&quot;:230,&quot;bytes&quot;:16823,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw"></picture><div></div></div></a></figure></div><h2>Athena Aktipis: Not For Peer Review</h2><p>Athena is a long-time colleague at Arizona State University, and a scientist, thinker and communicator who refuses to be constrained by the straitjacket of disciplinary conventions.</p><p>She&#8217;s a psychologist by training, and a leading expert in cooperation science, theory, and practice. As well as being Director of the <a href="https://www.cooperativefutures.org/">Cooperative Futures Institute</a> and <a href="https://www.aktipislab.org/">The Cooperation Lab</a> at ASU, she&#8217;s also the founder of <a href="https://www.zombifiedmedia.org/">Zombified Media</a> and the host of the podcast, <a href="http://www.zombified.org/">Zombified</a> &#8212; both of which cleverly intertwine her research and scholarship in ways that make them interesting, meaningful and accessible to a very wide audience!</p><p>Earlier this year Athena launched the Substack newsletter <em><a href="https://athenaaktipis.substack.com/">Not for Peer Review</a></em> as &#8220;a place to share unfiltered ideas about cooperation, evolution, and the future.&#8221; As she describes it, &#8220;It is a space for me to share with you ideas that don&#8217;t fit neatly in journals, and where I can bring my imagination, playful spirit, and speculative mind to the table.&#8221;</p><p>Back in November Athena wrote about <a href="https://athenaaktipis.substack.com/p/start-here-opening-my-notebooks">why she started writing on Substack</a> &#8212; it&#8217;s a great introduction to her writing here, and what to expect as a subscriber:</p><div class="embedded-post-wrap" 
data-attrs="{&quot;id&quot;:178105007,&quot;url&quot;:&quot;https://athenaaktipis.substack.com/p/start-here-opening-my-notebooks&quot;,&quot;publication_id&quot;:3293448,&quot;publication_name&quot;:&quot;Not for Peer Review&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!zXlc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9eea1ac8-f139-4f90-be0e-373b2360b7cb_1280x1280.png&quot;,&quot;title&quot;:&quot;I've Been Filling Notebooks for Decades. Here's Why I'm Finally Sharing Them&quot;,&quot;truncated_body_text&quot;:&quot;Since my teens, I&#8217;ve kept notebooks about all the things that fascinate me, from human nature to the nature of the universe, from the origins of life to the future of life on our planet.&quot;,&quot;date&quot;:&quot;2025-11-05T19:45:53.491Z&quot;,&quot;like_count&quot;:33,&quot;comment_count&quot;:17,&quot;bylines&quot;:[{&quot;id&quot;:117995987,&quot;name&quot;:&quot;Athena Aktipis&quot;,&quot;handle&quot;:&quot;athenaaktipis&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/042d27db-4810-4ecb-91e0-b10ae99defbd_1365x1365.jpeg&quot;,&quot;bio&quot;:&quot;Hi, I'm Athena Aktipis, Associate Professor in the Department of Psychology at Arizona State University &amp; Executive Director of the Cooperative Futures Institute. 
I study cooperation across systems, from cells to societies.&quot;,&quot;profile_set_up_at&quot;:&quot;2023-01-14T17:01:25.068Z&quot;,&quot;reader_installed_at&quot;:&quot;2025-11-05T20:18:46.503Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:3355085,&quot;user_id&quot;:117995987,&quot;publication_id&quot;:3293448,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:3293448,&quot;name&quot;:&quot;Not for Peer Review&quot;,&quot;subdomain&quot;:&quot;athenaaktipis&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;After publishing over 100 peer-reviewed papers&#8212;an essential but slow process&#8212;I&#8217;m opening decades of notebooks to explore the evolution of life, the practice of living well now, and the work of building a cooperative future together.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9eea1ac8-f139-4f90-be0e-373b2360b7cb_1280x1280.png&quot;,&quot;author_id&quot;:117995987,&quot;primary_user_id&quot;:117995987,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2024-11-05T20:18:04.513Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Athena 
Aktipis&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[1829455,892978],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://athenaaktipis.substack.com/p/start-here-opening-my-notebooks?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!zXlc!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9eea1ac8-f139-4f90-be0e-373b2360b7cb_1280x1280.png" loading="lazy"><span class="embedded-post-publication-name">Not for Peer Review</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">I've Been Filling Notebooks for Decades. 
Here's Why I'm Finally Sharing Them</div></div><div class="embedded-post-body">Since my teens, I&#8217;ve kept notebooks about all the things that fascinate me, from human nature to the nature of the universe, from the origins of life to the future of life on our planet&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">5 months ago &#183; 33 likes &#183; 17 comments &#183; Athena Aktipis</div></a></div><h2>Christina Agapakis: Oscillator</h2><p>I first came across Christina Agapakis&#8217; work through my colleague Emma Frow. Emma&#8217;s a leading researcher and expert on the dynamics between emerging technologies and society, and is well known for her work in the field of synthetic biology. As part of this, she spent time working with Ginkgo Bioworks, where Christina was then Creative Director, and worked closely with her. </p><p>Christina launched the Substack newsletter <em><a href="https://www.oscillator.blog/">Oscillator</a></em> back in 2024. Through it she brings an intriguing blend of creative and scientific insight to her thinking and writing (as well as a good deal of nuance) that I really appreciate. 
And while Christina isn&#8217;t as prolific as some of the authors here, I&#8217;ve been enjoying going back through her articles from the past couple of years.</p><p>A good place to start on Oscillator is a piece from September on &#8220;<a href="https://www.oscillator.blog/p/vibe-coding-a-genome">Vibe coding a genome</a>,&#8221; where she explores how AI models are becoming increasingly powerful at constructing whole genomes &#8212; effectively vibe coding in the language of DNA:</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:174612092,&quot;url&quot;:&quot;https://www.oscillator.blog/p/vibe-coding-a-genome&quot;,&quot;publication_id&quot;:2710754,&quot;publication_name&quot;:&quot;Oscillator&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!86Vo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33c3d993-3052-412c-b54b-95cffb07a18d_400x400.png&quot;,&quot;title&quot;:&quot;Vibe coding a genome&quot;,&quot;truncated_body_text&quot;:&quot;Over the past decade, AI-generated faces have come sharply into focus. 
The transition from the blurry faces made by the first generative adversarial networks to the choppy strangeness of faces made by deep convolutional GANs to StyleGAN and thispersondoesnotexist.com&quot;,&quot;date&quot;:&quot;2025-09-26T14:46:34.760Z&quot;,&quot;like_count&quot;:38,&quot;comment_count&quot;:4,&quot;bylines&quot;:[{&quot;id&quot;:3445452,&quot;name&quot;:&quot;Christina Agapakis&quot;,&quot;handle&quot;:&quot;oscillator&quot;,&quot;previous_name&quot;:&quot;Oscillator&quot;,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50d04c72-1e5e-4b45-a8f2-a3c7df79c9bc_2400x1600.jpeg&quot;,&quot;bio&quot;:&quot;Bio/tech&quot;,&quot;profile_set_up_at&quot;:&quot;2024-06-16T10:01:55.885Z&quot;,&quot;reader_installed_at&quot;:&quot;2024-07-26T18:38:16.851Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:2750518,&quot;user_id&quot;:3445452,&quot;publication_id&quot;:2710754,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:2710754,&quot;name&quot;:&quot;Oscillator&quot;,&quot;subdomain&quot;:&quot;oscillator&quot;,&quot;custom_domain&quot;:&quot;www.oscillator.blog&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Biology and technology&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/33c3d993-3052-412c-b54b-95cffb07a18d_400x400.png&quot;,&quot;author_id&quot;:3445452,&quot;primary_user_id&quot;:3445452,&quot;theme_var_background_pop&quot;:&quot;#9A6600&quot;,&quot;created_at&quot;:&quot;2024-06-16T10:02:08.164Z&quot;,&quot;email_from_name&quot;:&quot;Christina Agapakis&quot;,&quot;copyright&quot;:&quot;Christina 
Agapakis&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[822546,1868168],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.oscillator.blog/p/vibe-coding-a-genome?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!86Vo!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33c3d993-3052-412c-b54b-95cffb07a18d_400x400.png" loading="lazy"><span class="embedded-post-publication-name">Oscillator</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Vibe coding a genome</div></div><div class="embedded-post-body">Over the past decade, AI-generated faces have come sharply into focus. 
The transition from the blurry faces made by the first generative adversarial networks to the choppy strangeness of faces made by deep convolutional GANs to StyleGAN and thispersondoesnotexist.com&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">6 months ago &#183; 38 likes &#183; 4 comments &#183; Christina Agapakis</div></a></div><h2>Richard Jones: Soft Machines</h2><p><em>Soft Machines</em> is the first of three more traditional blogs I&#8217;ve chosen to highlight this year. I&#8217;ve known its author, Richard Jones, since my nanotechnology days, and have long respected his work and thinking. </p><p>Richard started the <em>Soft Machines</em> blog in 2004, around the time he published the book <em>Soft Machines: Nanotechnology and Life</em>. Since then it&#8217;s evolved along with his academic research and leadership. </p><p>Richard recently retired from the University of Manchester, where he was Vice-President for Regional Innovation and Civic Engagement. And as he has had, in his words, a &#8220;bit more time&#8221; on his hands since then, he has been writing more for the blog.</p><p>I would recommend reading Richard for his highly informed insights into technology and innovation. And a great place to start is his recent post on <em><a href="https://softmachines.org/?p=3218">The Year in Soft Machines</a>.</em></p><h2>Athene Donald: Athene Donald&#8217;s Blog on Occam&#8217;s Typewriter</h2><p>Another writer whom I&#8217;ve known for years, and someone who&#8217;s a traditional blogger in the best possible sense, is Athene Donald. Athene is Professor Emeritus at the University of Cambridge, a Fellow of the Royal Society, and former Master of Churchill College in Cambridge. 
She&#8217;s been a member of the <em><a href="https://occamstypewriter.org/">Occam&#8217;s Typewriter</a></em> blog collective since 2010, and still writes regularly for it.</p><p>Athene brings a lifetime of experience and insight to her writing, and is well worth reading for nuanced, candid and informed insights into academic and scientific culture, and their intersection with society more broadly.</p><p>A lovely introduction to Athene&#8217;s writing can be found in her recent post on <a href="https://occamstypewriter.org/athenedonald/2025/12/14/practice-and-experience/">Practice and Experience</a>, where she adroitly connects Jane Austen, taking up the piano again, and science/math education. </p><h2>Brigitte Nerlich: Making Science Public</h2><p>Finally, another long-time acquaintance and traditional blogger who&#8217;s well worth reading: Brigitte Nerlich.</p><p>I first got to know Brigitte through her work and writing (we&#8217;ve never met in person) during my nanotechnology days. She started blogging while a professor at the University of Nottingham in the UK, as part of a Leverhulme Trust-funded research program on science and politics. That was back around 2014. Since then she has continued to write regularly (now as Professor Emeritus) for the blog <em><a href="https://makingsciencepublic.com/">Making Science Public</a></em>.</p><p>I must confess that I hadn&#8217;t read her blog for some time until recently, when she got in touch about a recent piece of mine. Since then, though, I&#8217;ve been enjoying rediscovering her work.</p><p>Brigitte&#8217;s expertise spans science, language and society, and as a result she brings a unique perspective to the interplay between science, technology, and society. 
As she writes on the blog&#8217;s About Page, she mainly focused on the &#8220;role of metaphors and other framing devices in science, policy and media&#8221; in her work, &#8220;focusing on climate change, infectious disease, genetics, genomics, epigenetics, synthetic biology, nanotechnology and now AI.&#8221;</p><p>Capturing her growing focus on AI, this is a great place to start exploring her work: <em><a href="https://makingsciencepublic.com/2025/11/28/observing-shifts-in-metaphors-for-ai-what-changed-and-why-it-matters/">Observing shifts in metaphors for AI: What changed and why it matters</a></em>.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TfIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" width="230" height="15.322802197802197" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:1456,&quot;resizeWidth&quot;:230,&quot;bytes&quot;:16823,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Are we living in a foveated reality?]]></title><description><![CDATA[Video games trick players by only rendering in high detail what's being observed. So do spatial computing headsets. Even our eyes and brain do it. Maybe the universe does as well ...]]></description><link>https://www.futureofbeinghuman.com/p/are-we-living-in-a-foveated-reality</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/are-we-living-in-a-foveated-reality</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 21 Dec 2025 15:20:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Ltuo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ltuo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ltuo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 424w, 
https://substackcdn.com/image/fetch/$s_!Ltuo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!Ltuo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!Ltuo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ltuo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3016332,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/181789082?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Ltuo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!Ltuo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!Ltuo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!Ltuo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94c14d7c-6ceb-452b-ba70-5a29edf38948_2912x1632.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Image: Midjourney/Photoshop</figcaption></figure></div><p>Does reality exist beyond what we can see and experience in the moment? Or do we inhabit a universe that is profoundly good at creating the <em>illusion</em> of reality that matches what we&#8217;re currently experiencing?</p><p>It&#8217;s a weird idea and, truth be told, just a little fringe. But it&#8217;s also one that&#8217;s deeply intriguing &#8212; especially as we&#8217;re entering the holiday season, when it&#8217;s always good to have something to throw into the conversation that makes people stop and think!</p><p>It&#8217;s also an idea that, bizarrely, loosely connects speculations that we may be living in a simulation with some of the more counterintuitive aspects of quantum physics.</p><p>The connection here is the concept of <em>foveation</em> &#8212; and more specifically, the question of whether we are living in a <em>foveated reality</em>.</p><p>Foveation is what our eyes and brains do to make us feel as if we&#8217;re experiencing reality to the full while conserving &#8220;brain compute.&#8221; They do this by only rendering what is directly in front of us in high definition, using a very small but highly sensitive part of the retina &#8212; the fovea. Because that small patch of &#8220;high definition reality&#8221; follows our visual focus of attention, we&#8217;re fooled into thinking we can see everything around us clearly.</p><p>The same technique is used in video games, where only the details that a game player is experiencing on their screen at any one time are rendered in high resolution. 
And it&#8217;s at the heart of how technologies like Apple&#8217;s spatial computing headset manage to be so immersive without you having a supercomputer strapped to your head &#8212; by only rendering in high definition what&#8217;s directly in front of your eyes.</p><p>In other words, we know that foveation is a technique that can very effectively fool us into feeling that we&#8217;re living in a richly detailed reality that extends far beyond our immediate perception. But in practice, it&#8217;s only what&#8217;s right in front of us at any given time that&#8217;s actually being constructed &#8212; whether through the interplay of nerve signals from our fovea and our brain, the coordination between eye tracking sensors and processors in a headset, or the on-screen rendering of a vastly larger virtual game space.</p><p>And this raises something of a knotty question: if foveation works so well, how do we know that we are not all characters in a massive computer simulation where what we <em>think</em> of as reality is, in fact, a <em>foveated reality</em> &#8212; one where the universe (or the mega-computer behind it) is generating the reality we perceive on the fly?</p><p>A crazy idea, I know. And not one that, if I&#8217;m honest, I buy into. But it is one that is a great catalyst for exploring interesting new ideas and possibilities &#8230; or simply throwing into conversations and seeing how people react! 
</p><p>It&#8217;s also an idea that I was reminded of quite delightfully a few days ago while in conversation with my good colleague and simulation theory expert Riz Virk on the latest episode of <em><a href="https://podcasts.apple.com/us/podcast/is-life-a-simulation-ai-games-and-the-future/id1771688480?i=1000741493951">Modem Futura</a></em>.</p><p>Riz has a new edition of his book <em><a href="https://www.zenentrepreneur.com/simulationhypothesis">The Simulation Hypothesis</a></em> out (highly recommended by the way), and my co-host Sean Leahy and I used this as a chance to catch up with him. (Riz has also appeared on the Joe Rogan Experience and the Jordan Harbinger Show, so we were pretty stoked to get some time with him.)</p><p>As always, the conversation was wide-ranging in the best possible way (and is perfect holiday listening &#8212; you can find links at the end of this newsletter). But this concept of a foveated reality ended up being central to it. </p><p>Of course, the idea that we&#8217;re all just players in a massive cosmic video game <em>feels</em> deeply improbable. And one rather large problem the idea faces is the sheer compute power it would take to simulate everything in the known universe down to the smallest subatomic particle. From everything we know, this feels preposterous &#8212; impossible even &#8212; given the mind-boggling complexity and vastness of the universe. I&#8217;ve even used this argument myself against the simulation hypothesis in my own writing.</p><p>But what if the mega supercomputer simulating the universe (and everything in it) didn&#8217;t need to simulate everything, everywhere, all at once? 
What if all it had to do was to construct what any one person was experiencing right in front of them at any given time?</p><p>What if, in fact, this universal simulation computer was doing exactly what our eyes and brain, video games, and spatial headsets already do?</p><p>This possibility has the potential to transform the <em>impossible</em> problem of simulated reality into a merely <em>improbable</em> one &#8212; albeit one that is still mind-bendingly complex.</p><p>It still feels like something of a long shot to think that we&#8217;re all living in a massive computer simulation built on the back of foveated algorithmic optimizations. And yet, scientists are constantly chipping away at our conceptions of what reality actually is. And disturbing as this might seem, emerging ideas are bringing the possibility that our perceived reality is an illusion closer than we might think.</p><p>This hit home as I was reading a piece by another good colleague just after we spoke with Riz &#8212; the physicist and author Paul Davies. </p><p>Paul is an exceptionally well-known science writer, communicator, and physicist, as well as being co-founder of the <a href="https://beyond.asu.edu/">Beyond Center</a> at ASU. He&#8217;s also someone who&#8217;s used to pushing back on established concepts of what we might think of as reality. </p><p>I&#8217;ve known Paul for some years now, and have always found his work and thinking wonderfully challenging and stimulating. 
</p><p>In this case, what grabbed my attention was an article by him <a href="https://www.newscientist.com/article/2505823-why-quantum-mechanics-says-the-past-isnt-real/">for New Scientist</a> titled &#8220;Why quantum mechanics says the past isn&#8217;t real.&#8221;</p><p>Paul was writing about the physicist John Wheeler's <em>delayed choice experiment</em> &#8212; a now-verified quantum phenomenon where the type of measurement you make <em>today</em> appears to determine what a particle did in the <em>past</em>, even billions of years ago.</p><p>It&#8217;s a possibility that has garnered renewed attention with recent breakthroughs, and one that raises intriguing questions about the nature of reality &#8212; especially if there is a connection between the act of observing the universe in the present, and how the past is constructed at that point to match the reality you experience.</p><p>Which sounds just a little bit like the idea of foveated reality &#8230;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>One explanation of Wheeler&#8217;s idea of delayed choice is that there are many pasts, and the act of observation ties your particular present to one particular past. </p><p>This begins to open up the brain-aching possibility of multiple universes, where everything that could happen has happened in some parallel version of the reality we&#8217;re experiencing. 
</p><p>But it also raises the question of whether, just as in video games and spatial reality headsets &#8212; and our own brains &#8212; the reality we experience is merely a construction that just <em>feels</em> like it&#8217;s part of something bigger and more coherent.</p><p>In other words, is the past that led up to the present you are experiencing here and now, and the context within which you are experiencing it, simply an on-the-fly simulation created by some cosmic mega-computer that has learned the trick of fooling you into believing that this foveated slice of experience is part of a larger coherent reality? </p><p>Probably not, but it&#8217;s an intriguing idea, especially as it begins to tentatively connect the worlds of quantum science and the simulation hypothesis.</p><p>The writer Douglas Adams famously &#8212; although admittedly fatuously &#8212; wrote that &#8220;time is an illusion, lunchtime doubly so.&#8221; Maybe he was foreshadowing emerging thinking around time, quantum physics, and the nature of reality. 
Or maybe he realized that the only way to explain the irreconcilable weirdness of life, the universe, and everything, is to recognize that we&#8217;re all inside some massive simulation &#8212; and at any one point all we see is our own foveated illusion of reality.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Either way, whether we&#8217;re living in a foveated reality, wrapped up in the illusion of one, or are merely stuck with the <em>real</em> reality as we know it (or think we do), what intrigues me about fanciful excursions like this is that, bizarre as they may seem, they do provide a creative jolt to the imagination that helps us see and think about things in different ways.</p><p>And, of course, they provide the perfect fodder for messing with people&#8217;s heads over a long, lazy holiday lunch &#128522;</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TfIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, 
https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" width="230" height="15.322802197802197" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:1456,&quot;resizeWidth&quot;:230,&quot;bytes&quot;:16823,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, 
https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><em>I&#8217;ll be taking a break from the Future of Being Human Substack newsletter over the new year break, but until next time, hope you have a restful and enjoyable end of the year, and see you in 2026!</em></p><p><em>And if you&#8217;re interested in listening to or watching the conversation with Riz &#8212; and biased as I am, I would highly recommend it for an entertaining and informative holiday listen over the break &#8212; you can catch it on <a href="https://podcasts.apple.com/us/podcast/is-life-a-simulation-ai-games-and-the-future/id1771688480?i=1000741493951">Apple Podcasts</a>, <a href="https://open.spotify.com/episode/12lvXMtH0T9Z3cORm3GdSf">Spotify</a>, or YouTube (below).</em></p><p><em>Happy listening!</em></p><div id="youtube2-BGpEKLt6vZ0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;BGpEKLt6vZ0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/BGpEKLt6vZ0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" 
height="409"></iframe></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I am, of course, pushing the alignment harder than is most likely warranted here, but the juxtaposition of the two ideas does raise some interesting possibilities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Fans of Adams&#8217; <em>The Hitch Hiker&#8217;s Guide to the Galaxy</em> will, of course, realize that this idea is further explored in Zarniwoop&#8217;s &#8220;universe in an office&#8221; and the story that unfolds around it. But that&#8217;s a story for another day &#8230;</p></div></div>]]></content:encoded></item><item><title><![CDATA[Revisiting custom GPTs — the good, the bad, and the ... interesting! ]]></title><description><![CDATA[It's been a minute but I thought it time to take a fresh look at OpenAI's custom GPT feature. 
I found myself both frustrated and surprised!]]></description><link>https://www.futureofbeinghuman.com/p/revisiting-custom-gpts</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/revisiting-custom-gpts</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 14 Dec 2025 16:05:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!atPa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!atPa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!atPa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!atPa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!atPa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!atPa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!atPa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7566981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/181544781?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!atPa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 424w, https://substackcdn.com/image/fetch/$s_!atPa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 848w, https://substackcdn.com/image/fetch/$s_!atPa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 1272w, https://substackcdn.com/image/fetch/$s_!atPa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5c6a1d-a479-4cbc-b237-fb527bf5b7b7_2912x1632.png 
1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>OpenAI launched custom GPTs <a href="https://openai.com/index/introducing-gpts/">a couple of years ago</a> in November 2023. They were a neat way of creating a ChatGPT-powered custom AI bot without any coding, one that could be shared with others. </p><p>At the time, I played around with them and found them cute and interesting, but ultimately limited and frustrating. 
</p><p>Two years on though, more and more people seem to be folding custom GPTs into their workflow and offering them to others &#8212; sometimes as part of a professional service. And so I thought it was about time I took another look &#8212; just in case anything had changed.</p><p>The custom GPT &#8220;Grand Challenge&#8221; I chose was to create a GPT that would allow users to engage with over 300 posts on this Substack. I&#8217;ve long been frustrated by how quickly some of these become buried and invisible, and so it seemed like a no-brainer to create an app that allowed readers to rediscover them in interesting and useful ways.</p><p>With hindsight, this was probably not a good project to test OpenAI&#8217;s GPT builder platform with, as will become apparent shortly. But it did end up reminding me of where some of the big limitations are with GPTs that depend on a simplified Retrieval-Augmented Generation (RAG) approach.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>My plan was a good one &#8212; or so I thought &#8212; and a pretty sophisticated one to boot! </p><p>I started off by exporting all 324 of my Substack posts (including pre-launch imported posts) into a database file, and set to work with Claude (using Opus 4.2) to synthesize this into a well-structured JSON file (data stored as structured text) with summaries of each post and links to the original &#8212; including keywords and categories. </p><p>The idea was that the GPT would refer to the file every time it was asked a question, and provide blindingly insightful responses based on 324 posts, including links to sources. And the summaries and metadata associated with each post meant that the GPT wasn&#8217;t overwhelmed with several megabytes of data. 
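</p><p><em>(For illustration, each record in the JSON knowledge file looked roughly like this; the field names below are simplified stand-ins rather than my exact schema, sketched here in Python:)</em></p>

```python
import json

# Illustrative sketch of a single post record in the knowledge file.
# Field names and values are simplified stand-ins, not the exact schema.
post = {
    "title": "Spoiler Alert: I rebuilt my book for AI!",
    "url": "https://www.futureofbeinghuman.com/p/spoiler-alert-wtf",
    "date": "2026-04-02",
    "summary": (
        "An experiment in translating a 2018 book into a website "
        "designed primarily for AIs."
    ),
    "keywords": ["AI", "books", "publishing"],
    "category": "experiments",
}

# The full knowledge base is just a few hundred of these records.
knowledge_base = {"posts": [post]}
print(json.dumps(knowledge_base, indent=2))
```

<p>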
</p><p>What could possibly go wrong?</p><p>Lots, as it turns out, and as I should have known.</p><p>But before I get there, on to part 2 of the plan: </p><p>I next asked Claude to construct four additional documents:</p><p>First, I asked it to develop an author voice style guide, based on the posts &#8212; I wanted the GPT to sound like me.</p><p>Next, I asked for a guide to my personal and professional perspectives, again pulling mainly from the posts.</p><p>Third, I asked it to develop a core set of instructions for the GPT.</p><p>And finally, I asked for an additional document containing detailed guidance, as the core GPT instructions are limited to fewer than 8,000 characters.</p><p>In other words, working with Claude (which is my preference for developing complex document ecosystems like this), I started to construct an extensive instruction set and knowledge base for the GPT that would ensure it was powerful, smart, and accurate.</p><p>When I uploaded the files to my new GPT and ran it, it was fantastic! The responses were articulate, informed, insightful, serendipitous, and persuasive. In other words, everything I hoped for.</p><p>But they were also deeply flawed.</p><p>What I&#8217;d forgotten was that OpenAI&#8217;s machinery behind the GPTs chunks and segments the uploaded knowledge documents, meaning that, at any one time, it only sees a fraction of them. </p><p>In effect, knowledge retrieval was only partial at any given time, and was based on context. And so while the GPT could respond with eloquence and beauty, accuracy and usefulness flew right out of the window. </p><p>It felt just like being back in November 2023!</p><p>Not to be beaten, I worked on seeing if I could find a way to overcome the limitations. 
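</p><p><em>(The failure mode is easy to reproduce in miniature. Here&#8217;s a toy sketch in Python; the keyword-overlap scoring is purely a stand-in for whatever retrieval OpenAI actually runs under the hood, but the problem with &#8220;global&#8221; questions is the same:)</em></p>

```python
# A toy model of chunk-based retrieval. Keyword overlap stands in for
# whatever embedding search OpenAI actually runs, but the failure mode
# is the same: only the top-scoring chunks are ever seen.
documents = [
    "Post 1 (2008): nanotechnology and risk",
    "Post 200 (2024): custom GPTs and their limits",
    "Post 201 (2024): more thoughts on custom GPTs",
]

def retrieve(query, docs, k=2):
    # Score each chunk by crude keyword overlap with the query.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

# A topical query works well enough: the relevant chunks score highest.
print(retrieve("custom GPTs", documents))

# A "global" query scores every chunk roughly the same; nothing in the
# ranking encodes "oldest", so the right chunk surfaces only by luck.
print(retrieve("what is the oldest post", documents))
```

<p>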
I spent hours with Claude, iterating and reiterating, trying different approaches (including moving from a JSON to a plain text file), getting creative with the instructions, and occasionally losing my rag (pun very much intended!).</p><p>But to no avail. The GPT continued to be superficially compelling and substantively flawed (two tests it never fully passed were reliably retrieving the oldest post on the Substack, and breaking away from repeatedly referring to a small number of posts once it had latched onto them).</p><p>And so I came to the disappointing conclusion that custom GPTs &#8212; at least, those built using OpenAI&#8217;s custom GPT builder &#8212; remain deeply flawed. They&#8217;re fine for playing around with, and possibly good for some tasks if you understand their limitations. But they are still limited by partial and opaque retrieval, are strongly influenced by internal heuristics, and have a tendency to favor beautiful responses over accurate or reliable ones. They are highly sensitive to context, and change behavior depending on the subscription plan a user has and the model being used.</p><p>In other words, they are interesting and persuasive (and incredibly easy to spin up), but deeply unreliable.</p><p>But then &#8230;</p><p>Having realized that what I wanted was beyond the capabilities of a custom GPT, I thought: why not lean into the flaws?</p><p>And so, still working with Claude, I added instructions that introduced a dash of epistemic humility and reflexivity into the GPT&#8217;s character. And I went line by line through the instruction files to ensure that the GPT represented what I was looking for, rather than just what Claude thought I wanted.</p><p>I also, at some point, added summary files of my three books, just because Claude seemed to think they were important. </p><p>The resulting GPT was still badly flawed. But it now realized this, and was happy to talk about it! 
And this made it far more interesting to engage with.</p><p>The result is a GPT that is sometimes brilliant and sometimes not, but is usually aware of its limitations and happy to help users work around them &#8212; or simply embrace them.</p><p>It also provides, as it turns out, a great meta-reflection on generative AI and our evolving relationships with it.</p><p>If you want to see what this reflectively flawed GPT is like, please do check it out &#8212; the link&#8217;s below:</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://chatgpt.com/g/g-693c248046148191a999516a4457e7e6-the-future-of-being-human&quot;,&quot;text&quot;:&quot;Future of being Human Custom GPT&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://chatgpt.com/g/g-693c248046148191a999516a4457e7e6-the-future-of-being-human"><span>Future of being Human Custom GPT</span></a></p><p></p><p>The bottom line, though, is that not a lot has changed in the past two years as far as I can see in OpenAI&#8217;s custom GPT land &#8212; although I&#8217;m sure there&#8217;ll be a deluge of commenters telling me how wrong I am.</p><p>Which in itself will be great, as I&#8217;d like to think that some things have got better since November 2023 &#128522;</p><p><em>(Update: I&#8217;ve posted a couple of links to alternatives to OpenAI&#8217;s custom GPTs in the comments, including Gemini Gems and Google&#8217;s NotebookLM. 
Each has different pros and cons, but worth exploring &#8212; as is the GPT above.)</em></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TfIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" width="230" height="15.322802197802197" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:1456,&quot;resizeWidth&quot;:230,&quot;bytes&quot;:16823,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>Postscript</h2><p>Just in case anyone&#8217;s interested, you can examine the GPT files I used below (not including the knowledge base as it&#8217;s rather large) 
&#8212; although please be aware that what&#8217;s powering the GPT when you read this may be different, because of course another issue with the whole system is that <em>there is no version control</em>!</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Gpt Instructions Core</div><div class="file-embed-details-h2">62.7KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/a3cf447a-2f25-422e-8adb-31f9b2278227.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/a3cf447a-2f25-422e-8adb-31f9b2278227.pdf"><span class="file-embed-button-text">Download</span></a></div></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Gpt Detailed Guidance</div><div class="file-embed-details-h2">209KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/3c92db9e-5011-45a7-930c-35e210f4e85a.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/3c92db9e-5011-45a7-930c-35e210f4e85a.pdf"><span 
class="file-embed-button-text">Download</span></a></div></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Author Voice Style Guide</div><div class="file-embed-details-h2">59.2KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/b3009ceb-f329-4683-a272-999ec99f555b.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/b3009ceb-f329-4683-a272-999ec99f555b.pdf"><span class="file-embed-button-text">Download</span></a></div></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Author Background Perspectives</div><div class="file-embed-details-h2">57.5KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/2d57ccbe-3791-4dcf-ad47-b5252d4d117c.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/2d57ccbe-3791-4dcf-ad47-b5252d4d117c.pdf"><span class="file-embed-button-text">Download</span></a></div></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div 
class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Films From The Future Summary</div><div class="file-embed-details-h2">61.9KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/62be6eea-7f55-4f6b-a1e2-948b3469930b.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/62be6eea-7f55-4f6b-a1e2-948b3469930b.pdf"><span class="file-embed-button-text">Download</span></a></div></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Future Rising Summary</div><div class="file-embed-details-h2">73.1KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/bd7118be-6696-46ce-9340-829315253e92.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/bd7118be-6696-46ce-9340-829315253e92.pdf"><span class="file-embed-button-text">Download</span></a></div></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail-default" 
src="https://substackcdn.com/image/fetch/$s_!0Cy0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack.com%2Fimg%2Fattachment_icon.svg"></image><div class="file-embed-details"><div class="file-embed-details-h1">Ai Art Being Human Summary</div><div class="file-embed-details-h2">84.4KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.futureofbeinghuman.com/api/v1/file/dca6a1c4-d9df-44c8-b1a9-715e3cbf329a.pdf"><span class="file-embed-button-text">Download</span></a></div><a class="file-embed-button narrow" href="https://www.futureofbeinghuman.com/api/v1/file/dca6a1c4-d9df-44c8-b1a9-715e3cbf329a.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I should be clear here that, while the OpenAI GPT Builder is an incredibly easy way to build customized GPT applications, it is also one of the least robust ways of doing this. My purpose here was not to create a robust app &#8212; for that I&#8217;d have used different approaches &#8212; but to stress-test OpenAI&#8217;s platform, because the lack of friction between idea and app here makes it especially attractive to users.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Do universities have a future in Trump's plans to accelerate scientific discovery through the use of AI? ]]></title><description><![CDATA[The recently announced Genesis Mission sets out to transform how science is done in the US. 
Yet it's a mission that places national labs&#8212;and not universities&#8212;in the driving seat.]]></description><link>https://www.futureofbeinghuman.com/p/universities-genesis-mission</link><guid isPermaLink="false">https://www.futureofbeinghuman.com/p/universities-genesis-mission</guid><dc:creator><![CDATA[Andrew Maynard]]></dc:creator><pubDate>Sun, 07 Dec 2025 15:52:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-iLS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-iLS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-iLS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 424w, https://substackcdn.com/image/fetch/$s_!-iLS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 848w, https://substackcdn.com/image/fetch/$s_!-iLS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 1272w, 
https://substackcdn.com/image/fetch/$s_!-iLS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-iLS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2807557,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-iLS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 424w, https://substackcdn.com/image/fetch/$s_!-iLS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 848w, 
https://substackcdn.com/image/fetch/$s_!-iLS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 1272w, https://substackcdn.com/image/fetch/$s_!-iLS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a759d7-6c27-4ab0-8cb6-074efd5f3e2c_2048x1152.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image: Midjourney</figcaption></figure></div><p>A couple of weeks ago the US Government announced an audacious 
plan to &#8220;unleash a new age of AI&#8209;accelerated innovation and discovery that can solve the most challenging problems of this century.&#8221; Drawing heavily on the promise of AI-accelerated discovery, The Genesis Mission&#8212;established by an <a href="https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/">Executive Order signed by President Trump</a> and led by the <a href="https://genesis.energy.gov/">Department of Energy</a>&#8212;has its sights set firmly on ensuring &#8220;America&#8217;s technological dominance and global strategic leadership&#8221; in an increasingly turbulent world. </p><p>The plan is big, bold, and could be a game changer for how science is done in an age of AI. But of course, being a consummate academic, I had a far more opportunistic first question: &#8220;What does this mean for university funding?&#8221; </p><p>After all, America&#8217;s research universities have long been seen as the engines of innovation that underpin technological progress.</p><p>The short answer is not a lot&#8212;at least at present.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> But this doesn&#8217;t mean that there isn&#8217;t room at the table for enterprising universities that are willing to focus more on what they can give to the mission, rather than what they can gain from it.</p><h2>Accelerating applied research through national labs</h2><p>Reading through the initial documents that have been released on the Genesis Mission, it is clear that this is intended as an initiative driven by the nation&#8217;s national labs and public-private partnerships, with universities getting only a couple of passing mentions that feel more obligatory than substantive. 
</p><p>The <a href="https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/">Executive Order</a>, for instance, mentions combining the efforts of &#8220;brilliant American scientists&#8221; and &#8220;world-renowned universities&#8221; with American businesses and existing Federal resources &#8220;to achieve dramatic acceleration in AI development and utilization&#8221;&#8212;but the document primarily places the Mission in the hands of the nation&#8217;s national labs and corporate partners (including most of the major US-based AI-forward companies).</p><p>The Department of Energy&#8217;s <a href="https://genesis.energy.gov/">Genesis Mission website</a> is a little more specific, noting that the &#8220;Genesis Mission brings together the Department of Energy&#8217;s 17 National Laboratories with America&#8217;s leading universities and industry, including pioneers in artificial intelligence, computing, materials, and energy, to build the most powerful scientific platform ever to solve national challenges.&#8221; Yet click on the list of initial collaborators, and all 56 of them are industry partners.</p><p>In a <a href="https://www.energy.gov/science/articles/under-secretary-gils-letter-community">Letter to the Community</a> following the announcement of the Genesis Mission, Mission Director and DOE Under Secretary of Science Dario Gil also references university partners, acknowledging the US&#8217; universities as a key pillar of the American innovation system. 
But again, this feels pro forma, with the primary thrust of applied and mission-driven work coming from the national labs and industry partners.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>The result is a sense that there&#8217;s a necessary nod to university-based research as it is such an integral part of the US innovation ecosystem, but that the Genesis Mission is far more focused on achieving impact at speed without relying too heavily on universities. </p><p>This is perhaps not surprising given the current administration&#8217;s very tangible frustrations with academic establishments&#8212;frustrations that stretch beyond the ideological clashes that attract media headlines, and get to the heart of the perceived value that universities deliver to the American people. And here, there&#8217;s palpable discontent around the use of federal funds to support research that doesn&#8217;t appear to demonstrably serve the national interest.</p><p>Of course, it would be easy to criticize the White House for not prioritizing university funding more overtly in the Genesis Mission. 
After all, where else will the intellectual fuel for true innovation come from, if not from the nation&#8217;s foremost academic institutions?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> But rather than fall back on protectionist critique, I suspect it&#8217;s far more useful to take a step back and ask how US universities might bring <em>true</em> value to an initiative like the Genesis Mission, rather than get tied up in knots justifying <em>why</em> they deserve a slice of the funding pie.</p><p>Before I get there, though, it&#8217;s worth considering more closely what the Genesis Mission is and what it sets out to achieve.</p><h2><strong>A new approach to solutions-focused science</strong></h2><p>The Genesis Mission was launched on November 24<sup>th</sup> as (to paraphrase the Executive Order) a massive coordinated national effort focused on dramatically accelerating scientific discovery, strengthening national security, securing energy dominance, enhancing workforce productivity, and multiplying the return on taxpayer investment into research and development&#8212;all with the aim of furthering America&#8217;s technological dominance and global strategic leadership.</p><p>To achieve this, the Mission sets out to combine vast data reserves covering decades of federally funded research with advanced AI models, high performance computing, and cutting-edge quantum technologies&#8212;all with the intent to, in the words of <a href="https://www.energy.gov/science/articles/under-secretary-gils-letter-community">Dario Gil</a>, &#8220;double the productivity and impact of American science and engineering within a decade (and in half that time across our National Laboratory complex).&#8221;</p><p>Just let that sink in for a moment: The aim here is to leverage vast federal data reserves, AI, massive compute, and even quantum technologies, to 
<strong>double</strong> science and engineering productivity and impact across US National Labs within <strong>5 years</strong>. </p><p>That is one audacious aim!</p><p>Within this, the Genesis Mission explicitly encompasses <a href="https://genesis.energy.gov/">transformative breakthroughs</a> in fields closely coupled to national security and global technological dominance that range from advanced manufacturing and high performance/critical materials, to new energy sources and cutting-edge molecular medicine. And to achieve this it sets out to massively leverage resources within Federal labs, all while drawing on industry partnerships and following highly efficient research strategies that are focused on very specific outcomes. </p><p>In other words, this is an initiative that is committed to speed, efficiency, and impact, with little tolerance for collaborations that may hold it back.</p><p>And in an administration that&#8217;s already questioning the value of university-based research, I suspect that university funding is in danger of falling, at least in part, in this latter category.</p><p>Yet despite this, my sense is that universities <em>do</em> have considerable value to bring to the Genesis Mission, as long as their leadership and researchers are willing to think critically and creatively about what they have to offer, and what they do not.</p><h2><strong>Rethinking the academic value proposition</strong></h2><p>So how might universities bring value to an initiative that seems, on the surface at least, not to need them?</p><p>One thing&#8217;s for sure: simply claiming that universities deserve to be part of the Genesis Mission from a position of entitlement is not going to fly. 
Rather, I strongly suspect that there will need to be some willingness to examine where the academic value proposition is weak, and explore ways to genuinely strengthen it.</p><p>Admittedly, this will be tough given that academia has a long and illustrious tradition of moving slowly, being caught up in its own sense of self-importance, and having a tendency to deliver on what researchers <em>think</em> is important, rather than what the organizations funding them actually want.</p><p>That said, I believe that there are opportunities here for universities that are willing to ask <em>how</em> they can contribute, rather than simply <em>what</em> they can get out of the Genesis Mission.</p><p>So what might these look like? There are, of course, many possibilities here. But I did want to present just three that, together, could form the beginnings of a Mission-aligned partnership strategy:</p><h2><strong>1. Research and scholarship that supports high speed/high serendipity discovery</strong></h2><p>The Genesis Mission is all about speed. But within research circles there&#8217;s often an intuitive sense that going fast makes it harder to create the space for those unexpected discoveries that so often lead to transformative steps forward.</p><p>In other words, it can often feel that there&#8217;s a tension between speed and serendipity.</p><p>As it turns out, this is an oversimplification. 
But the framing of serendipity and speed as two factors that are important to the success of the Genesis Mission does provide a useful illustrative model for exploring where universities may bring value to the initiative:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NUod!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NUod!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 424w, https://substackcdn.com/image/fetch/$s_!NUod!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 848w, https://substackcdn.com/image/fetch/$s_!NUod!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 1272w, https://substackcdn.com/image/fetch/$s_!NUod!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NUod!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png" width="420" height="383.125" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:613,&quot;width&quot;:672,&quot;resizeWidth&quot;:420,&quot;bytes&quot;:79399,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NUod!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 424w, https://substackcdn.com/image/fetch/$s_!NUod!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 848w, https://substackcdn.com/image/fetch/$s_!NUod!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 1272w, https://substackcdn.com/image/fetch/$s_!NUod!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90248d04-31cc-4ece-88d0-48fdc6ca1ea6_672x613.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">A discovery serendipity-speed matrix</figcaption></figure></div><p>By considering four domains of discovery associated with serendipity and speed, it becomes possible to identify where the Genesis Mission ideally needs to be situated to succeed, where it potentially lands without strategic input from universities, and what it might take to get it to where it needs to be.</p><p><strong>Quadrant 1 of the matrix</strong>&#8212;highly serendipitous but slow rates of research&#8212;is most often associated with academic initiatives. 
As it turns out, this isn&#8217;t exactly true, and a growing body of work is showing how serendipity can cut across different research environments (for instance, Ohid Yaqub&#8217;s 2018 paper on a taxonomy and theory of serendipity is highly recommended here<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>). Yet &#8220;slow with a promise of serendipity&#8221; remains an approach to knowledge generation that continues to be synonymous in many people&#8217;s minds with leading research universities.</p><p>In contrast, <strong>quadrant 3 of the matrix</strong>&#8212;high speed but not very serendipitous research&#8212;tends to be more closely associated with industry and government labs. Again, the mission- and outputs-driven research associated with these labs <em>can</em> be serendipitous. But it&#8217;s not unusual to find the freedom and flexibility necessary for unexpected discoveries to be curbed in the name of efficiency.</p><p>Compared to these, <strong>quadrant 2</strong>&#8212;low speed and not very serendipitous research&#8212;is where no one wants to be, but is where I suspect a lot of federally funded research ends up. This is a quadrant where research is constrained by grueling grant cycles, overbearing bureaucracy, and the need to deliver on performance indicators that don&#8217;t necessarily align with fast and creative discovery.</p><p>Then we have <strong>quadrant 4</strong>&#8212;high speed and highly serendipitous research. This is where the Genesis Mission aspires to be. It combines the speed of discovery associated with commercial labs with the step-change breakthroughs more usually associated with academia. 
Yet this is a highly elusive quadrant, and one there&#8217;s no guarantee the Genesis Mission will inhabit&#8212;simply because resources and ambition alone do not automatically lead to serendipity <em>and</em> speed.</p><p>The good news for the Federal Government is that we know it&#8217;s possible for private and government-led research initiatives to exist in this quadrant. Bell Labs, DARPA, the Apollo Missions, and the Manhattan Project are all good examples of success stories here. But these are not the norm. Intentionally and efficiently moving the Mission from quadrant 3 to quadrant 4 requires theory and understanding of how this might be achieved&#8212;especially where emerging technologies like AI and quantum tech are being leveraged. </p><p>And this is where university-based research and scholarship potentially come into their own. While universities may not be able to compete with national labs and commercial initiatives on speed, they excel at developing new theories and models&#8212;especially where these benefit from insights and methods that transcend traditional disciplinary boundaries. And I suspect that this is a research domain where slow-speed but high-serendipity research&#8212;quadrant 1 research&#8212;could be influential in informing and catalyzing transitions from quadrant 3 to quadrant 4.</p><h2><strong>2. Learning and education pathways that support high speed/high serendipity enterprises</strong></h2><p>While new research into how to better enable targeted high speed/high serendipity research is necessary, there&#8217;s only so far that it can go without new findings being translated into new skills and practices. And for this to happen, there will need to be new learning and education opportunities and pathways.</p><p>This is important generally for supporting transitions from quadrant 3 to quadrant 4 in mission-driven research. 
But it is likely to be vastly <em>more</em> important around efforts to leverage AI- and quantum-enhanced discovery, as these take us into uncharted territories for which no robust learning and education pathways currently exist.</p><p>And here it&#8217;s hard to imagine how any institutions other than research universities could fill this need. They already form the backbone of higher education in the US. As a result, they have a unique capacity to translate cutting-edge research into cutting-edge education and learning.</p><p>Given this, I would argue that universities have a critical role to play in equipping scientists, engineers, and technologists engaged in the Genesis Mission to accelerate mission-driven and high-value research and discovery&#8212;especially where this involves leveraging emerging AI and quantum technologies.</p><h2><strong>3. Pivoting to new university models</strong></h2><p>Given the potentially peripheral role of universities in the Genesis Mission as it&#8217;s been articulated so far, maybe it&#8217;s time to re-examine what they bring to the table. </p><p>This is, of course, a rather contentious suggestion, especially given the central role of research universities in the US&#8217; innovation enterprise over the past 80 years. Yet it&#8217;s one that needs to be taken seriously as the Genesis Mission indicates increasing movement away from the post-World War II model of the research&#8211;innovation pipeline established by Vannevar Bush.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> </p><p>I&#8217;m not sure that the national-labs focus of the Genesis Mission is sufficient on its own to justify exploring new university models. 
But if this is taken as one of a growing number of indicators that research universities are out of step with societal expectations, at the very least it behooves them to pay attention.</p><p>This, of course, is not news in the US. Alongside the current administration&#8217;s heavy-handed &#8220;re-evaluation&#8221; of research funding strategies and priorities, there&#8217;s been growing introspection amongst academic institutions and leaders for some time around how universities might better align with societal perceptions, expectations, and public-oriented value creation. And this extends to ensuring that research more clearly benefits&#8212;or is understood to benefit&#8212;national goals and priorities.</p><p>There are, of course, complexities and nuances here. There remain compelling arguments for research enterprises that are insulated against the ever-changing whims of political expedience and public opinion, and that are designed to deliver long-term public benefit through research that&#8217;s driven by curiosity, a love of discovery, and the freedom to ask &#8220;why,&#8221; &#8220;how,&#8221; and &#8220;what if&#8221;&#8212;without necessarily knowing where these questions will lead. This is how serendipitous discoveries are often made, how the limits of understanding are expanded in unexpected ways, and how the solid foundations for future research are laid.</p><p>But there is no reason why universities cannot also explore how they might move closer to quadrant 4&#8212;or even from quadrant 2 to quadrant 4&#8212;in the serendipity-speed matrix, by adding greater speed and relevance to academic research initiatives.</p><p>Perhaps not surprisingly, this reflects initiatives that are already taking place within some institutions. 
Anyone who&#8217;s familiar with my own institution&#8212;Arizona State University&#8212;will recognize something of ASU&#8217;s redefining of the <a href="https://newamericanuniversity.asu.edu/">&#8220;New American University&#8221;</a> here (although ASU&#8217;s ambitions lie far beyond simply reimagining the research enterprise). Yet given the sea-change in research and discovery that AI is ushering in, together with the potential importance of initiatives like the Genesis Mission and broader discontent over the value of university-driven research, now would seem to be a good time for more creative thinking around what a public-serving research university might look like.</p><p>For instance, do emerging AI models and capabilities open up new opportunities for research universities to combine serendipity and speed? Can AI be leveraged as a serendipity-accelerator by scientists? How feasible is it for universities to spin up their own versions of the Genesis Mission&#8212;AI research labs that synergistically feed off human ingenuity and machine intelligence? Are such AI research labs limited to certain domains of discovery, or could universities forge new pathways into domains that transcend conventional disciplinary boundaries? 
And could such initiatives become valuable partners in enterprises like the Genesis Mission and beyond?</p><p>All of this feels possible, as long as there&#8217;s a willingness within universities to pivot away from convention and tradition, and toward institutional structures and missions that respond to shifting expectations and emerging possibilities.</p><p>The question is, is the will there&#8212;and the impetus&#8212;to change?</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TfIV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png" width="230" height="15.322802197802197" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:1456,&quot;resizeWidth&quot;:230,&quot;bytes&quot;:16823,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.futureofbeinghuman.com/i/180822189?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TfIV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 424w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 848w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 1272w, https://substackcdn.com/image/fetch/$s_!TfIV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e2f9832-ebeb-40c8-a307-164ce0395259_2000x133.png 
1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>And this brings us back to where we started: is there a place for research universities in the Genesis Mission? I have to believe there is. But it&#8217;s far from guaranteed. Rather, it&#8217;s up to universities&#8212;and more explicitly their leadership and members&#8212;to find and articulate the value they potentially bring to the table, rather than assume it&#8217;s a given.</p><p>Because, at present, I&#8217;m not sure it is.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It&#8217;s worth noting that, while currently available information highlights the central role of National Labs and Federal facilities and resources, it&#8217;s possible that in the future, funding opportunities will arise through the DOE and other mission-driven agencies, and even possibly key research agencies such as the NIH and NSF. However, a pragmatic read of the current landscape would indicate that universities will need to clearly demonstrate how they will substantively contribute to the Mission&#8212;along with clearly articulating mission-critical returns on investment&#8212;if they want to position themselves as partners and collaborators.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>As an aside, I would highly recommend reading Dario&#8217;s letter. 
In an administration known for its political posturing in official documents, the letter is a breath of fresh air&#8212;authentic, visionary, inspiring, and one that brings people together toward a common purpose rather than dividing them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I am, of course, being more than a little tongue in cheek here.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Ohid Yaqub (2018) <em>Serendipity: Towards a taxonomy and a theory.</em> Research Policy 47, 169-179. DOI: <a href="http://dx.doi.org/10.1016/j.respol.2017.10.007">10.1016/j.respol.2017.10.007</a>. Yaqub identifies and describes four distinct typologies of serendipity: Targeted research that solves an unexpected problem (Walpolian serendipity); targeted research that solves the problem-in-hand via an unexpected route (Mertonian serendipity); untargeted research that nevertheless solves an immediate problem (Bushian serendipity, after Vannevar Bush, not GW Bush); and untargeted research that leads to solutions to later problems (Stephanian serendipity). These cover the gamut of curiosity-driven to mission-driven research and discovery. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Vannevar Bush&#8217;s seminal 1945 report <em><a href="https://nsf-gov-resources.nsf.gov/2023-04/EndlessFrontier75th_w.pdf">Science: The Endless Frontier</a></em> established the so-called linear model of innovation that placed federally funded basic research carried out by universities at one end of the spectrum, and applied research in federal labs and industry at the other. And while the model has been challenged and transformed over the years to recognize the complex and multidimensional relationship between open-ended and outputs-driven research, it&#8217;s still implicitly embedded in many university models.    </p></div></div>]]></content:encoded></item></channel></rss>