I asked OpenAI’s ChatGPT about responsible innovation. This is what I got
The just-released conversational chatbot has an impressive ability to convey complex concepts in understandable language
There’s been a flurry of interest in OpenAI’s ChatGPT bot since it was opened up to the public last week — and for good reason. The bot is remarkably good at providing informed responses and engaging in conversations that feel authentic.
Since ChatGPT’s release, there’s been a lot of interest in how human the bot seems, and how good it is at pulling together information in coherent ways. It can answer simple questions with clarity. It can converse fluidly with users. It can help with writing and debugging code. It can even complete course assignments for students!
Playing around with the ChatGPT interface, though, I was particularly interested in its ability to synthesize information in ways that make it accessible and understandable to a broad audience — especially around concepts that are complex, obscure, and often conveyed in such convoluted ways by experts that they become impenetrable.
And where better to start than with my own field of socially responsible and ethical technology innovation?
“ChatGPT, what is responsible innovation?”
To explore ChatGPT’s ability to convey complex ideas in simple ways, I fed it four questions associated with responsible innovation, starting with a couple of definitions, and going on to a couple of more in-depth questions.
The results were impressive:
Me: What is responsible innovation?
ChatGPT: Responsible innovation is an approach to innovation that considers the potential consequences of new technologies and innovations on society and the environment, and takes steps to address and mitigate any negative impacts. It is based on the idea that innovation should be guided by ethical and social values, and that the development and use of new technologies should be done in a way that is sustainable, inclusive, and responsible. Responsible innovation involves considering the potential impacts of new technologies on a range of stakeholders, including consumers, workers, and the environment, and taking steps to ensure that these impacts are positive and beneficial. This can involve conducting research and analysis to understand the potential consequences of new technologies, engaging with stakeholders to gather their input and feedback, and implementing measures to manage and mitigate any negative impacts. Overall, responsible innovation is about taking a proactive and responsible approach to the development and use of new technologies in order to ensure that they benefit society and the environment.
Public Interest Technology
Me: What is Public Interest Technology?
ChatGPT: Public interest technology is a field that focuses on the use of technology and data to address social and public policy challenges. It involves the development and application of technologies and data-driven solutions to improve the delivery of public services, enhance government accountability, and address societal challenges such as inequality, poverty, and environmental degradation. Public interest technology practitioners work in a variety of settings, including government agencies, non-profit organizations, and the private sector. They use their expertise in technology and data analysis to develop solutions to complex social and policy problems, and to support the effective and responsible use of technology in the public sector. Public interest technology is a growing field that is increasingly being recognized as an important part of the broader effort to address social and policy challenges using technology and data-driven solutions.
The relevance of studying responsible innovation
Me: Why should we study the societal consequences of technology innovation?
ChatGPT: There are several reasons why it is important to study the societal consequences of technology innovation. First, technology has the potential to significantly impact society and individuals in both positive and negative ways. By studying the societal consequences of technology innovation, we can better understand the potential impacts of new technologies and take steps to mitigate any negative consequences. Second, studying the societal consequences of technology innovation can help us identify potential areas for further research and development. For example, if we identify a negative societal impact of a particular technology, we can work to develop solutions or alternative technologies that address that impact. Finally, studying the societal consequences of technology innovation can help us ensure that the development and use of new technologies align with ethical and social values, and support the common good. This can help promote the responsible and sustainable use of technology, and ensure that the benefits of innovation are widely shared.
Employing people with training in Responsible Innovation
Me: Why should technology companies employ people who understand the social, ethical, and legal consequences of emerging technological capabilities?
ChatGPT: There are several reasons why technology companies should employ people who understand the social, ethical, and legal consequences of emerging technological capabilities. First, having a diverse team with a range of expertise and perspectives can help technology companies anticipate and address potential ethical and legal issues that may arise from the use of new technologies. This can help companies avoid negative consequences and protect the reputation of the company. Second, having employees who understand the social, ethical, and legal consequences of emerging technologies can help technology companies align their products and services with ethical and social values. This can enhance the company’s public image and build trust with customers, investors, and other stakeholders. Finally, having employees who understand the social, ethical, and legal consequences of emerging technologies can help technology companies develop innovative and responsible products and services that address important societal challenges and support the common good. This can help companies create value for their customers and contribute to the broader goals of society.
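The exchanges above were typed by hand into the public ChatGPT web interface. For anyone wanting to rerun the experiment programmatically, here is a minimal sketch using OpenAI’s Python client — an assumption on my part, since the article’s experiment used the web interface only, and the model name in the sketch is purely illustrative:

```python
# Sketch: posing the article's four questions to a chat model programmatically.
# Assumptions: the `openai` Python client (v1.x) is installed, an API key is
# set in OPENAI_API_KEY, and the model name below is illustrative -- the
# experiment described in the article used the public web interface.
import os

QUESTIONS = [
    "What is responsible innovation?",
    "What is Public Interest Technology?",
    "Why should we study the societal consequences of technology innovation?",
    ("Why should technology companies employ people who understand the "
     "social, ethical, and legal consequences of emerging technological "
     "capabilities?"),
]


def build_messages(question: str) -> list[dict]:
    """Wrap a single question in the chat-message format the API expects."""
    return [{"role": "user", "content": question}]


def ask(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one question and return the model's reply (needs network + key)."""
    from openai import OpenAI  # imported here so the sketch loads without the package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model, messages=build_messages(question)
    )
    return response.choices[0].message.content


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    for q in QUESTIONS:
        print(f"Me: {q}\n\nChatGPT: {ask(q)}\n")
```

Note that model outputs are not deterministic, so a rerun will produce answers that differ in wording from the transcripts above.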
All of these responses could, in principle, have been written by experts in the field — there’s no new information here, and no novel insights. But you’d be hard pressed to find experts who could give responses that are as clear and understandable as these.
This, to me, is a potential game changer — the ability to develop machine learning algorithms that augment our ability to communicate complex concepts to a diverse set of audiences.
Beyond the hype and the fear around what models like ChatGPT (and what follows on from them) are capable of, their ability to enhance our capacity for effectively conveying knowledge and understanding to others shouldn’t be underestimated.
From the examples above, there are strong indications that ChatGPT can articulate concepts for broad audiences with far greater clarity and accessibility than many experts — believe me, this is my field, I know!
Of course, people who know their stuff are going to quibble over nuances. But the fact remains that the explanations above of responsible innovation, public interest technology, and their value to society, are more accessible to more people than most of what I’ve seen coming out of experts in the field to date.
This opens up interesting opportunities to work with models like ChatGPT to vastly improve public communication and engagement, and to ensure potentially transformative ideas are articulated in ways that resonate with people who can benefit from them.
From what I’ve seen and experienced already, this is something that needs to be explored more, all the way from academics conveying complex ideas to colleges communicating why their courses are relevant, and much more besides.
Going Beyond Responsible Innovation
Of course there’s a meta-layer to this quick and dirty experiment with ChatGPT, and that’s the responsibility that comes with developing and using such models.
Developing machines that can interact with us in very human ways while seeming informed, engaged, and attentive, comes with a whole raft of risks that we’re just beginning to unpack. These range from the dangers of unhealthy attachments and gullibility, to algorithmic bias in processing and responses, and on to the presence of misinformation, the propagation of disinformation, and even the emergence of corrosive hate speech.
Navigating this risk landscape is going to need new thinking around how we develop powerful technologies like this in socially responsible ways, and how we are going to ensure the growth of social as well as economic value from them.
This is precisely why I was interested in how ChatGPT articulated the concepts that are needed to ensure its responsible and beneficial use — concepts like responsible innovation and public interest technology.
The good news is that the model is surprisingly adept at helping to clarify these ideas, and their importance in developing new technologies in beneficial ways. There is, of course, something of a serendipitous irony in AI being instrumental in providing pathways to overcoming the challenges it presents us with. But it’s one we should be running with.
However, the potential here lies far beyond getting responsible innovation right. Imagine using ChatGPT and similar models to enhance science communication, to facilitate engagement around complex topics, to inform innovators, policymakers and others of the challenges and opportunities they face, to increase understanding and awareness throughout society of advances in science and technology and how they potentially impact our lives.
In essence, we are looking at a leap in machine-augmented communication and engagement. And for anyone who has struggled with getting complex ideas across to others, this could be a game changer.
As long as we do it responsibly!