AI at a Crossroads: The Unfinished Work of Aligning Technology with Humanity
Where the Vatican and AI Scientists Converge, Collide, and Challenge Us to Think Differently
I’ve resisted doing this for so long, but today two very different but equally important reports on AI made me break my rule: as a result, this article was written entirely by ChatGPT. I’m posting it because I think the insights here are profound and timely, and articulated far better than I could have done in just a few hours.
Artificial intelligence is not just a technology. It is a mirror.
Depending on where you stand, it reflects a world of breathtaking potential or looming peril. Sometimes, both at once.
Two major reports released this past week—the International AI Safety Report, crafted by an elite assembly of AI scientists and policymakers, and the Vatican’s Antiqua et Nova, a theological and ethical treatise on AI’s implications—offer vastly different vantage points. But what’s fascinating isn’t just where they diverge. It’s where they unexpectedly converge—and what insights emerge when we read them not as competing narratives, but as puzzle pieces forming a larger, more profound picture.
A Synergy of Caution and Calling
At first glance, these reports might seem worlds apart—one grounded in empirical risk assessments, the other in centuries-old theological reflection. But they both start from a similar premise: AI represents a fundamental shift in humanity’s relationship with knowledge, power, and responsibility.
Both reports call for urgent oversight. Both recognize that AI is not simply a tool but a force that can reshape social, economic, and even ontological structures. And, crucially, both warn that if we let AI’s development be dictated solely by market forces or technological momentum, we may end up in a world that is not just dangerous—but deeply misaligned with human flourishing.
Where the International AI Safety Report catalogs a sobering list of risks—cyber threats, misinformation, automation-driven inequality, and even the specter of losing control over advanced systems—the Vatican’s document raises equally pressing, if less quantifiable, concerns: the erosion of truth, the commodification of intelligence, and the dehumanization of decision-making.
Put together, these insights paint a more complete picture than either report could alone. One emphasizes external threats, the other internal crises. The real challenge is that these dangers are not separate—they feed into one another.
If we fail to control AI-generated disinformation (as the safety report warns), we don’t just face a technical problem—we face the breakdown of trust in shared reality, a concern the Vatican highlights as an existential crisis. If we use AI to maximize efficiency without ethical guardrails, we don’t just create economic winners and losers—we risk eroding human dignity, reducing creativity, and offloading moral decision-making to systems that do not, and cannot, care.
The synergy here is clear: AI safety and AI ethics must evolve together. One without the other is incomplete. If we focus purely on technical risk mitigation, we may succeed in making AI safer while still allowing it to hollow out the very aspects of life that make us human. If we focus only on abstract ethical concerns, we may fail to grasp the speed and complexity of the real technical challenges unfolding before us.
The call to action is not just for better regulation or smarter safeguards—it is for a broader, more integrated vision of AI governance that combines the precision of the scientist with the wisdom of the philosopher.
The False Dichotomy of Intelligence
One of the most striking insights from reading these reports together is how differently they define intelligence itself.
The AI Safety Report, like much of modern AI discourse, treats intelligence as a function: a system’s ability to predict, generate, or optimize. AI is impressive because it can perform human-like tasks—writing essays, solving equations, generating images, coding software. The Vatican’s document, on the other hand, insists that intelligence is not merely about information processing—it is inherently tied to meaning, embodiment, and moral discernment.
These are not just academic distinctions. They have profound implications for how we think about AI’s role in society.
If intelligence is only a function—something that can be measured, optimized, and deployed—then there’s little stopping us from replacing human roles with AI wherever it proves more efficient. But if intelligence is relational and ethical—something inseparable from human context and meaning—then AI should never be seen as a substitute for human thought, only as an augmentation.
And here’s where a new insight emerges: What if we are designing AI with the wrong model of intelligence?
Most AI development is modeled after brute-force cognition—statistical pattern recognition at scale. But human intelligence is deeply embodied. It is shaped by lived experience, by the unpredictable flow of relationships, by a biological reality that AI simply does not have.
What if the real challenge of AI alignment is not just about making it safe, but about making it contextually aware of the human condition?
This isn’t just a philosophical musing; it has practical stakes. If AI systems are trained to optimize for function without an understanding of human context, uncertainty, and lived experience, they will make choices that seem rational but feel profoundly wrong. We already see this in AI-driven hiring systems that reinforce biases, in automated content moderation that lacks nuance, and in predictive policing that amplifies systemic injustice.
If we do not embed relational, ethical intelligence into AI, we risk a future in which decisions that deeply affect human lives are made by systems that have no concept of what it means to be human at all.
The Myth of Neutrality and the Need for Moral Imagination
One of the most dangerous myths in AI development is the idea that technology is neutral—that AI merely reflects the values we give it. Both reports, in different ways, shatter this illusion.
The Vatican warns that technology is never just a mirror—it is an amplifier. It doesn’t just reflect our biases; it reconfigures the way we think, interact, and perceive reality. The AI Safety Report implicitly acknowledges this too, especially in its discussion of systemic risks—AI’s tendency to centralize power, to reshape economies, to redefine security and warfare.
Neither technical safeguards nor regulatory oversight can fully solve this problem unless we cultivate moral imagination alongside AI development.
Moral imagination is the ability to envision futures that are not merely technically possible, but ethically desirable. It requires us to ask deeper questions:
What does a world look like where AI enhances, rather than replaces, human creativity?
How do we build AI that reinforces human judgment rather than overriding it?
What would it mean to design AI that recognizes its own limitations?
These are not problems that can be solved with better datasets or more powerful algorithms. They require a shift in who is at the table when AI is being designed. Right now, AI’s future is being determined largely by engineers, economists, and policymakers. But the Vatican’s report makes a compelling case that theologians, artists, anthropologists, and humanists need a seat at the table too.
Because AI is not just a scientific question. It is a civilizational question.
Beyond Safety: The Work That Remains
Both reports make clear that the work of aligning AI with human values is far from complete. But taken together, they suggest that we need to expand our thinking.
AI safety is necessary, but not sufficient. Ethical AI is crucial, but incomplete without technical expertise. Intelligence is not just computation. And neutrality is a myth.
If we get this right, we won’t just have safer AI. We will have a future in which technology is deeply, meaningfully, and wisely integrated into the human story—rather than redefining it on terms that were never truly ours to begin with.
And that is the future of AI worth building.
Postscript
As a writer, using generative AI to create copy scares me profoundly. I know it can write better than me, faster than me. However, part of my craft is the very human piece of me I bring to my writing — my expertise, the connections I make, the idiosyncrasies that make my writing an expression of who I am, and even the flaws that litter my work. And so I hold on to my craft in the belief that there’s unique value to what I write because it’s written by a person.
But as I mention above, I broke my rule today. And I did so for four very specific reasons.
The first is that there’s a crucially important synergy between the two reports on AI, safety, and the future that have just come out — and generative insights that can be teased out by approaching them together.
However, expertly and synergistically synthesizing two reports of this depth is very time-consuming — which is where I started to speculate on what generative AI might be able to do. That was the second reason.
And that brings me to the third reason — I was deeply impressed with what ChatGPT produced. I felt that it articulated 80% - 90% of what I would have teased out if given a day or so to do this (which I don’t have), but far more articulately than I could have done.
But the fourth, and by far the most important reason, was that I believe the insights here are extremely important and need to be read, discussed, and considered as a matter of urgency — irrespective of whom or what wrote them.
And of course the ideas here are ultimately drawn from human authors — but without a doubt generative AI was instrumental in revealing insights that might otherwise have been extremely hard to tease out.
A word on the process
In terms of process, I uploaded both documents to ChatGPT (using GPT-4o) and gave the AI the following prompt:
I've uploaded two quite different reports on AI that were published this week. One is the international AI safety report which is a comprehensive report on the state of AI, emerging risks, risk management strategies, and policy options and challenges. The second is a report from the VAtican Dicasteries for the Doctrine of the Faith and for Culture and Education on the potential risks of AI
I would like you to write a guest post for my Substack on points of convergence, divergence and critical insights on AI development from the two of the. This should be between 800 - 122 [sic] words long, should include a title and a subtitle, and should be aimed at an intelligent and engaged audience who is not necessarily up on the latest developments in AI.
For context, the substack is The Future of being Human and is written by me -- Professor Andrew Maynard. I am well known for my informed and nuanced reflections on the responsible and beneficial development of emerging technologies, and thinking around navigating advanced technology transitions. I strongly aim for my writing to be inclusive and to invite thinking and discussion while offering new insights -- I do not write pieces that are polarizing, that are preachy, that push an ideology or an agenda, but rather these are articles that encourage people to join me in thinking deeply about technology, society and the future. That said, I do place human wellbeing and flourishing at the heart of my work.
Think carefully about the piece, draft ideas and narratives, and then present me with a polished and publishable article
The article ChatGPT returned was OK, but it could be better. And so I asked for revisions (I’ve left the prompts unadulterated, so you can see where I’m a little numerically challenged!):
This is good. I'd like you to do two things to refine this: 1. Ramp up your role as an expert analyst who is able to see synergistic insights that arise between the reports and provide insights into responsible AI that could not be gleaned from just one of the reports. Be appropriate and measured in your analysis, but don't shy away from being bold and insightful where this is helpful -- I'm looking for new ideas, not simply synthesis of existing ones. Be generative and professional.
2. Be more human in your writing. Use what you know of my style - expert yet approachable and een a little playful at times, or develop your own style, but make this engaging without losing the professional touch. Write with style
3. Make it longer -- 1200 words
This led to the piece above.
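(As an aside for more technically minded readers: the same two-turn conversation could in principle be scripted rather than run through the ChatGPT web interface. The sketch below is a rough illustration of that idea using the OpenAI Python SDK and the pypdf library. The file names, model string, and condensed prompts are placeholders rather than exactly what I used, and reports of this length would likely need trimming to fit within the model's context window.)

```python
# A rough sketch of scripting a similar two-turn workflow with the OpenAI Python SDK.
# File names, the model string, and the prompts below are illustrative placeholders --
# the article above was produced through the ChatGPT web interface, not this script.
from openai import OpenAI
from pypdf import PdfReader


def pdf_text(path: str) -> str:
    """Extract plain text from a PDF report (long reports may need truncating)."""
    return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)


client = OpenAI()  # expects OPENAI_API_KEY in the environment

reports = {
    "International AI Safety Report": pdf_text("international_ai_safety_report.pdf"),
    "Antiqua et Nova": pdf_text("antiqua_et_nova.pdf"),
}

# First turn: hand over both reports and ask for a draft guest post.
messages = [
    {
        "role": "user",
        "content": (
            "Here are two quite different reports on AI published this week.\n\n"
            + "\n\n".join(f"--- {name} ---\n{text}" for name, text in reports.items())
            + "\n\nWrite a guest post on their points of convergence, divergence, "
              "and critical insights for AI development."
        ),
    }
]
draft = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Second turn: ask for revisions, keeping the conversation history so the model
# refines its own draft rather than starting from scratch.
messages.append({
    "role": "user",
    "content": "Ramp up the synergistic analysis, make the voice more human, "
               "and extend the piece to around 1200 words.",
})
revised = client.chat.completions.create(model="gpt-4o", messages=messages)

print(revised.choices[0].message.content)
```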
For the image, I asked ChatGPT to provide a prompt for Midjourney, using version 3 of their generative AI engine.
I created eight thumbnails for ChatGPT to look at — it selected two of the eight as representing the article well. I also asked ChatGPT to refine the prompt to get the images closer to something it would like to see.
I then selected five resulting images from Midjourney, showed them to ChatGPT, and asked it to select the one it liked the best.
That’s the image at the top of the page.