The "hard" concept of care in technology innovation
Does a lack of care in how we develop and use new technologies risk turning us and our creations into "monsters"?
A little over a year ago I was sitting in a meeting where some of the world’s leading experts were discussing emerging developments in AI. But what caught my attention was not the latest in frontier models or novel advances in machine learning, but a psychologist talking about the concept of care.
That psychologist was Alison Gopnik, and she was exploring the intersection between care, caregiving, and artificial intelligence.
Alison leads The Social Science of Caregiving project at Stanford University, and studies, amongst other things, the nature and roles of care and caregiving in human society — something that’s intimately intertwined with what it means to be human. And this extends to the technologies we create, interact with, and that become an integral part of our lives.
Alison’s ideas grabbed my attention at that meeting because I was already beginning to think about how the idea of care — and what might be called the “hard” concept of care[1] as opposed to soft and wooly notions of care — plays into socially beneficial and responsible innovation. But I was approaching this from a very different angle.
I’ve been grappling with navigating advanced technology transitions for the best part of two decades. And part of this includes the complex challenge of managing the unexpected and unintended consequences of novel advances. As anyone in this field knows, this is not easy. But a series of casual conversations with my colleague Emma Frow had planted the seeds of a new way to think about socially responsible and beneficial technology innovation.
Emma has been working for some time now on how the concept of care can inform the development and use of new technologies — in particular in the area of synthetic biology. I’ve chatted with her about her work on numerous occasions, and each time I’ve been intrigued by how emerging thinking about care opens up unique and powerful ways of approaching innovation that align more closely with human wellbeing and flourishing — without necessarily stymying progress.[2]
Emma’s work comes from a very different place than Alison’s, drawing on thinking that’s grounded in feminist theory and science and technology studies.[3] But despite the different approaches and disciplinary backgrounds there’s a common focus: how do we use a concept that’s so deeply ingrained in defining who we are and what we value to develop and use technologies in ways that genuinely and deeply benefit society at all levels, and do not cause unnecessary harm along the way?
I’ve been meaning to follow up on the seeds of thinking planted by both Alison and Emma ever since that meeting back at the tail end of 2023. But, as inevitably happens, a thousand and one other things got in the way.
And things may have stayed that way if it weren’t for a new technology that’s — ironically — sorely in need of new thinking around how it’s developed and used: AI, and specifically OpenAI’s Deep Research model.
Regular readers of this Substack will know that I’ve been exploring ways of using Deep Research in my work since its release a few weeks ago. The more I use it, the more I’m discovering how powerful a tool it is for augmenting my own research and thinking. And one way that it’s proving to be transformative is in allowing me to explore questions that I just haven’t had the time to think about in a non-AI-augmented world.
And so this past week I found myself unearthing those tentative threads of thinking around care and technology innovation, and working with Deep Research to flesh them out.
The resulting analysis — which you can access in the Notes section below — isn’t comprehensive, and it’s limited to thinking about care as a concept in technology innovation more broadly rather than technology as a means to enable care. But it is good. Very good.
More importantly though, it allowed me to shift my thinking from a few disconnected thoughts to something far deeper and more comprehensive — and all in a matter of hours.[4]
The analysis — which I would very strongly recommend reading — draws on work that underpins the thinking of Emma and others around care and technology innovation. Despite Alison’s research being part of the catalyst here, the analysis doesn’t touch on her work — something that reveals both the limitations of what AI is currently capable of (powerful as it is) and the utility of combining advanced AI with human insights.[5] But to someone relatively new to the field like myself, Deep Research’s analysis was the catalyst I needed to dig deeper and make further connections.
When I chatted with Emma about the analysis over email, she highlighted a couple of areas that it didn’t touch on or go deeply into — including the ways that concepts of care can bridge scales of practice and decision-making from lab research to broad social and political landscapes, and how concepts of care can inform effective governance.
She also noted that the analysis didn’t emphasize how, in her words, a care-based approach “helps draw attention to what is often invisible or undervalued work (at least according to our current valuation systems),” adding that “it’s a way of reorienting ourselves to things that are commonly disregarded or forgotten, but in the end turn out to be crucial to sustaining life.”[6]
But then she’s steeped in this area of thinking and research — and even here, Deep Research’s analysis was acting as a catalyst to our conversation around care.
So what did Deep Research come up with, and how is this beginning to inform my own thinking around care and technology innovation?
Again, I would strongly recommend reading the full analysis that’s included in the Notes below. But some of the top takeaways for me include:
The importance of reframing our relationship with emerging technologies as one informed by care — care for the technology itself (which may sound odd, but makes sense if you read the analysis), as well as for the people and communities it potentially impacts, especially when they are particularly vulnerable to adverse consequences. Quoting Deep Research’s analysis, a growing body of literature “has begun to explore what it means to care for technology, and how a care orientation could reshape the development of advanced fields like artificial intelligence (AI) and biotechnology.”
The idea that technologies are inherently imperfect “monsters” that “require ongoing nurturance, adjustment, and stewardship — in short, care — if they are to benefit society rather than cause harm.” This idea draws from the work of Bruno Latour, Emma Frow, and others, and harks back to Mary Shelley’s Frankenstein as a cautionary tale of what happens when we don’t care for our creations.
The power of care as a concrete and transformative concept that can be rigorously folded into how we innovate as a way to avoid harm and increase benefits across all levels of society. As the analysis concludes, “‘care’ is not a soft afterthought in innovation, but a critical ethos and practice for creating technology that truly serves humanity.”
The ways that thinking about innovation in the context of care encourages meaningful rather than performative action. For instance, drawing on the work of Jonathan Cohn, the analysis suggests that, rather than just ticking boxes of fairness and accountability within the context of AI, a “care-centric approach would start by asking: Is this technology serving the most vulnerable? Does it address real human needs and reduce harm?”
How a care-based approach to innovation at all levels allows a shift away from control (which can seem intuitive but also reduces the ability to be agile, flexible, creative and responsive) and toward human-centric adaptability. In its analysis Deep Research hypothesizes that “truly responsible innovation may require tempering the impulse to maximize control and efficiency with a willingness to engage in care – meaning attending to the unplanned, emergent needs that arise as technologies enter real life.”
Similarly, how a care-based approach to technology innovation can curb social injustices that are driven by a myopic push for power and impact. Again quoting the analysis, “Care ethics demands attentiveness to those who are most vulnerable or marginalized – in tech, this means asking who could be adversely affected or left behind by an innovation.” This, of course, may seem irrelevant to people who believe that we should be looking at the positive impacts of innovation in aggregate, but who have little time for the individuals who make up that aggregate. But it’s hard to imagine a truly humane future where we use technology to benefit the privileged at the expense of those who aren’t so privileged. This also resonates with the idea of “privileged irresponsibility” coined by the scholar Joan Tronto, where — quoting the analysis — “those in power (often unintentionally) escape their share of caring labor.”
The importance of drawing on diverse cultural concepts of care to ensure human flourishing in the face of globally transformative technologies. As the analysis notes, “Care is a universal value, but different cultures conceptualize it in unique ways that can enrich responsible innovation.” I explicitly asked Deep Research to explore this, and was interested to read the perspectives on care it synthesized, which cover everything from Confucianism and Buddhism to Daoism and indigenous knowledge systems.
There are many other connections and insights in the analysis that are worth exploring. And together they point toward the need for deeper thinking around the "hard" concept of care in technology innovation — not simply what is considered to be touchy-feely and “nice,” but ideas and implementations of care that connect a fundamental aspect of what makes us human with the decisions and actions we make that, in turn, impact every aspect of our lives.
Of course, Deep Research’s analysis is limited. It focuses on a relatively narrow area of thinking around care and technology innovation. And there are many additional domains that could be encompassed here, including data ethics, robotics and human-machine interactions, health technologies, and more. But it does begin to make visible and accessible ways of thinking about care and technology innovation that are often buried within academic communities or specialized areas of research — and it paves the way for deeper dives into what might constitute a “hard” concept of care in technology innovation writ large.
And to that end, I want to finish with the definition of care within the context of technology innovation that Deep Research arrived at. This doesn’t explicitly reference the “hard” concept of care (this is language I’ve started to use since reading Deep Research’s analysis) — but it does implicitly begin to flesh it out:
Care is both a value and a practice of attending to specific needs and promoting the well-being of others (people, communities, or even environments) in a context-sensitive, responsive way. To care means to recognize others’ vulnerabilities and needs, to feel empathy or concern, and to take responsibility in responding with thoughtful action. Rather than applying one-size-fits-all rules, care is inherently relational – it unfolds through relationships and requires understanding the particular context of those involved. As feminist ethicist Virginia Held explains, care is fundamentally about the “caring relation,” focused on meeting individuals’ concrete needs in a manner tailored to that context. It involves emotional engagement (such as empathy or compassion) as well as practical efforts (work or skills) to maintain, continue, and repair our shared world.
In simpler terms, care in technology innovation can be defined as designing and deploying tech with an attitude of empathy and responsibility toward all those affected. This means prioritizing well-being, dignity, and trust at every stage: being attentive to users’ and stakeholders’ needs, taking responsibility to address those needs, exercising competence in execution, and remaining responsive to feedback or evolving circumstances (echoing the four elements identified by Joan Tronto). A care-based approach rejects the notion of users as abstract data points or “users” in the vacuum; instead it views people as interdependent beings, where technology should support relationships and communities rather than isolate or exploit.
Crucially, an ethic of care does not imply a soft or lax approach—on the contrary, it demands rigor and accountability in doing the work of care. It calls for continually asking, “Who might be harmed or left behind by this innovation, and how can we address that?” and “In what ways can this technology actively support the flourishing of its users and broader society?” By making care an explicit guiding value (alongside traditional values like efficiency or profit), technologists and policymakers can ensure that innovation remains humane and just. In summary, care is empathetic, responsive action that safeguards well-being and strengthens the web of relationships — a compass by which we can navigate responsible technology development.
Integrating these diverse perspectives, we arrive at a more comprehensive view of care in tech innovation: it spans technical domains (anticipating unintended consequences and protecting what we treasure), cultural wisdom (drawing on global philosophies of empathy, community, and harmony), scientific understanding (aligning with human psychological needs and neural wiring), and practical action (case studies proving care can be built into code, design, and policy). This broadened lens ultimately enriches the initial deep dive, reinforcing that “care” is not a soft afterthought in innovation, but a critical ethos and practice for creating technology that truly serves humanity.
And this brings us back to Alison Gopnik and AI.
While Deep Research didn’t draw on her research, or even on her domain of expertise, its exposition of care in the context of technology innovation ties in with her work around care, caregiving, and how the technologies we create and integrate into our lives promote human flourishing.
And this leads to a question that transcends disciplines and areas of expertise, and gets to the heart of what it means to be human in an era of increasingly complex technologies: Can we afford to think about our relationships with the technologies we develop, their relationships with us, and how these in turn modulate how we relate to other individuals, communities, and the systems we are intimately intertwined with, without placing the hard concept of care at the very heart of what we do?
I suspect that we cannot.
Notes
While the article above draws on more than Deep Research’s analysis, the ideas and thinking it reflects are only possible because of the way that AI is beginning to accelerate and catalyze the process of thinking, research, and scholarship. This whole exercise would not have been possible just over a month ago, showing how fast things are moving at the intersection of AI and research/scholarship. Yet I’m already incorporating OpenAI’s Deep Research into my day-to-day work.
The analysis that Deep Research generated in this case was very good. As I mention above, it’s far from perfect. It makes some connections that would be hard for a human researcher to make, but it still lacks the professional intuition of a good polymathic scholar. That said, as a catalyst or accelerant to thinking and scholarship, it’s a game changer.
The report itself is worth reading both as an example of what Deep Research is capable of producing in a few minutes, and as a foundation document for anyone exploring the intersection of care and technology innovation further — and, in my words, the “hard” concept of care in technology innovation.
The report below is a formatted version of Deep Research’s response to two consecutive requests — both requests are included in the annexes:
1. I find this idea of the “hard” concept of care in technology innovation useful in two respects. First, it distinguishes care in this context as something of substance that can be used as a basis for policy, governance, and decision-making, as opposed to a “soft” conceptualization of care as something that’s more touchy-feely but less easy to operationalize. And second, it attests to the difficulty of the challenge of developing and applying this concept — important as it is.
2. There’s a great article by Emma and colleagues Erika Szymanski and Joshua Evans in Ginkgo Bioworks’ GROW, where she describes her thinking on care: https://www.growbyginkgo.com/2024/03/28/beyond-control/
3. I was sorely tempted to rather facetiously add in a trigger warning here for any readers who immediately started to reach for the “unsubscribe” button after being traumatized by phrases like “feminist studies” and “Science and Technology Studies.” Of course, it would have been unnecessary as I’m sure all readers of this Substack understand the importance of looking beyond labels to learn from different ways of thinking about and understanding the world …
4. It’s hard to overstate how transformative this is. Ideas that I might never get around to exploring, or which would take weeks or months if and when I did get round to them, can now be pursued with ease and efficiency. The trick, of course, is using AI as a catalyst to human-initiated thinking and research, rather than as a substitute.
5. This is most likely due to how I framed the question I posed to Deep Research — which is included in the analysis’ annexes.
6. As an aside, I love this insight from Emma. As we develop and use technologies—or make decisions in a technology-dominated world—how often do we naively ignore, undervalue, or simply overlook factors that are critical to human well-being, simply because we are too arrogant to look beyond the narrow confines of our own hubris?