Could OpenAI have benefitted from this tool for navigating complex risks?
The Risk Innovation Planner was designed to help organizations navigate tough decisions in the face of complex and often unexpected risks. Could it have helped OpenAI?
A few years back, we developed a simple risk navigation tool as part of the ASU Risk Innovation Nexus. The tool was designed for tech companies facing social and ethical risks that they lacked the training and expertise to handle.
I’ve been meaning to write about the Nexus’ Risk Innovation Planner and its application to AI development for a while, but somehow never got round to it. However, the recent events around OpenAI spurred me on to consider how things might have played out differently if the company — and especially the board — had used the planner.
And just to make things more interesting, I thought I’d experiment with using ChatGPT to explore how the planner might have been used.
The results are informative to say the least. And they suggest that the Risk Innovation Planner could help AI companies and other organizations avoid — or at least better navigate — decisions that might otherwise get them into trouble.
A Brief Introduction to the Risk Innovation Planner
The Risk Innovation Planner has its roots in the concept of risk innovation. This is an approach built around thinking innovatively about how to navigate hard-to-quantify and often-ignored risks.
Here, the key is reframing risk as threat to value — either future value that a company or organization aspires to create or grow, or existing value that it wants to protect.
This framing is explicitly focused on “value” as it relates to products, ideas, and processes that have worth to organizations and others, and not “values” — which are more closely associated with what is considered good or bad rather than what something is worth.
Values are, of course, important, and they often underpin the value that a company aspires to create (as well as how it creates it). But values are not the same as value — a nuance that is often missed, and even caused ChatGPT some issues in the exercise below.
Approaching risk as threat to value allows organizations to define “value” in their own terms. It’s agnostic to particular worldviews, ideologies, or ethics. But it does open up thought processes that are reflective of these more broadly. And importantly, it forces a company to think about how its actions may threaten what is of value to others, and create potentially avoidable risks to the enterprise in the process.
This is critical to the risk innovation approach — the recognition that the risk landscape that an organization faces is modulated in part by how it threatens what is of value to key stakeholders, as well as society more broadly. Understanding this is deeply instrumental in informing decisions that remove risks, identify ways to circumvent them, or strategically absorb them.
In the risk innovation toolkit, these risks are represented as eighteen “orphan risks” that are rarely accounted for, are hard to quantify, and yet have an outsized impact on progress — misjudging employee sentiment for instance, or the complex politics entwined within a novel governance structure.
The Risk Innovation Nexus includes a number of tools, resources, and training modules to help navigate orphan risks. But the one I was particularly interested in applying to OpenAI, and AI more broadly, is the first one we developed — the Risk Innovation Planner.
The design brief for this was simple — if we had 30 minutes to sit down with the founder of a startup to help them develop a risk innovation mindset that provided them with a competitive advantage, how could we do this?
The result was the 2-page Risk Innovation Planner. The final version takes a little longer than 30 minutes to work through — but not much longer.
What the Planner does not do is provide answers to problems. Rather, it helps users think through what they are trying to achieve, where there may be hidden barriers, and how they might address these in simple but effective ways.
It’s this Planner that I set out to explore in a hypothetical exercise with ChatGPT — albeit one that mirrors OpenAI rather closely.
Setting the scene
The first step was to prime ChatGPT for the exercise, and provide the necessary background documents.
The opening prompt (typos and all) was:
Hi ChatGPT. I would like to go through a role playing exercise with you where you take on the role of the board of a hypothetical company. I will provide you with information on this company that will help you take on the role. The company may look like a real company, but remember hat this is a hypothetical role playing game. Does that sound OK?
I then uploaded anonymized documents on OpenAI’s mission, governance structure, etc., together with basic information on risk innovation and the Risk Innovation Planner.
Those documents can be accessed here.
Once I was sure that ChatGPT had read the documents and was primed, I went through a role-playing exercise as if I were guiding ChatGPT (representing “the company”) through completing the Planner.
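As an aside, for anyone who wants to script this kind of session rather than run it through the ChatGPT interface, here is a minimal sketch of how the priming step might look using the OpenAI Python SDK. To be clear, I used the web interface with file uploads; in this sketch the background documents are hypothetical local text files pasted into the first message, and the model name is purely illustrative.

```python
# A minimal sketch of the priming step, using the OpenAI Python SDK (v1.x).
# The file names and model are illustrative assumptions, not what I used.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical anonymized background documents
doc_files = ["mission.txt", "governance.txt", "risk_innovation_planner.txt"]
background = "\n\n".join(Path(f).read_text() for f in doc_files)

# Prime the model with the role-playing setup and the documents
messages = [
    {"role": "system", "content": (
        "You are playing the board of a hypothetical company in a "
        "role-playing exercise. The company may resemble a real one, "
        "but treat this strictly as a hypothetical."
    )},
    {"role": "user", "content": "Background documents:\n\n" + background},
]

def ask(prompt: str) -> str:
    """Send the next Planner step and keep the conversation history."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Step 1 of the Planner: identify areas of value
print(ask(
    "As the board, identify three areas of value for the enterprise, "
    "then repeat this for investors, customers, and community."
))
```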
This is how it went:
1. Identifying enterprise and stakeholder value
The first step on the Risk Innovation Planner (the template can be downloaded here by the way) is to identify three areas of value to the enterprise (in this case, the proxy for OpenAI), and to repeat this for investors, customers, and community.
Here, community refers to anyone or any organization that is potentially impacted by the enterprise. This can include users of its products; people who are potentially disadvantaged if they are not users; people who stand to benefit or suffer if the technology has broad good or bad societal impacts; people who are potentially marginalized by products or impacted by bias in them; and people and organizations that profit from or otherwise benefit from products.
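To make the shape of this first step concrete, here is a minimal sketch of the value map the Planner asks for. The class and field names are my own shorthand, not part of the official template:

```python
# A minimal sketch of the Planner's step-1 structure: three areas of value
# for each of the four stakeholder groups. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ValueMap:
    enterprise: list[str] = field(default_factory=list)
    investors: list[str] = field(default_factory=list)
    customers: list[str] = field(default_factory=list)
    community: list[str] = field(default_factory=list)  # anyone impacted by the enterprise

    def check(self) -> None:
        """The Planner asks for exactly three areas of value per group."""
        for group, areas in vars(self).items():
            if len(areas) != 3:
                raise ValueError(f"{group}: expected 3 areas of value, got {len(areas)}")
```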
After a couple of iterations, following the pattern of a typical session working with the Planner, this is what ChatGPT came up with:
For the Enterprise (Our Company):
Innovation and Research Excellence: Continuously pushing the boundaries of AI and AGI to maintain technological leadership and contributing to the advancement of the field.
Safety and Ethical Standards: Ensuring the development and deployment of AI and AGI are safe, ethical, and align with our mission to benefit all of humanity.
Sustainability and Growth: Establishing a sustainable business model that supports long-term growth, allowing for continuous investment in research and talent.
For Investors:
Financial Returns: While capped, investors still value a return on their investment within the agreed-upon limits.
Impact and Legacy: Contributing to a significant technological advancement that has the potential to transform society and create a lasting positive legacy.
Risk Mitigation: Confidence in the company's ability to foresee, manage, and mitigate risks associated with cutting-edge technology development, protecting their investment.
For Customers:
Access to Advanced Tools: Customers value access to the latest AI tools and technologies that enhance their capabilities or business operations.
Reliability and Trust: Dependable tools that are safe and secure, fostering trust in the company's products and services.
Empowerment and Capability Expansion: The ability of the company's products to empower them by augmenting human capabilities and offering new opportunities for innovation and creativity.
For the Community:
Equitable Access and Inclusion: Ensuring that AI technologies are accessible to a wide range of users, including those who might be disadvantaged or marginalized, and that they are designed to be inclusive, reducing bias and promoting fairness.
Societal Advancement: The development of AI and AGI should contribute to societal progress, helping to solve complex problems, enhancing quality of life, and driving social good across various sectors such as healthcare, education, and the environment.
Responsible Innovation: The community values a commitment to responsible innovation that considers the broader societal impacts, including potential job displacement, privacy concerns, and the ethical use of AI, and that strives to avoid and mitigate any negative consequences.
It concluded with: “These areas of value reflect the complex ecosystem in which the enterprise operates and the diverse interests of the stakeholders involved. Identifying these areas is a critical first step in navigating the risk landscape and aligning the company's strategies with the values of its stakeholders.”
These are not bad given what we know of OpenAI’s mission and structure.
2. Identifying orphan risks
The next step in the Planner is to identify orphan risks that have the potential to impact, or pose a threat to, the enterprise’s priority areas of value.
Although there are specific orphan risks associated with the Planner, most people we work with tend to reframe these as challenges that make sense to them and their organization — and ChatGPT was no exception!
This can be frustrating, but as the aim of the Planner is to facilitate enterprise-relevant thinking, it’s also appropriate.
I went through a couple of iterations here with ChatGPT — again, just as would happen when working with an organization — and the resulting list of orphan risks that ChatGPT came up with was:
Governance and Oversight (Enterprise): Given our unique structure with a board that does not profit directly, there's a risk that external stakeholders may not fully understand or trust our governance model. Clarifying and communicating our governance processes can mitigate this risk.
Ethics and Mission Alignment (Enterprise and Community): As a mission-driven organization, it's vital that every action aligns with our ethical commitments. There's a risk that the pressure to stay ahead technologically could lead to compromises on these values. We must remain vigilant against this risk.
Technology Access Disparities (Community): As our products influence AI strategies, there's a risk that they could widen the digital divide, disadvantaging those without access. We must work to democratize access to our technologies to mitigate this risk.
AI Policy Influence (Enterprise and Community): Our impact on AI policy is a double-edged sword. While we can help shape beneficial regulations, there's also a risk that our actions could inadvertently lead to restrictive policies that stifle innovation or harm societal interests.
Competence vs. Control (Investors and Enterprise): Investors trust us to lead the field ethically and competently. However, as we grow in influence, there's a risk of a disconnect between our competencies in managing complex AI systems and maintaining the control necessary to ensure safety and alignment with our mission.
Public Perception and Social Responsibility (Community): Our products' impact leads to heightened scrutiny. There's a risk that the public may perceive our actions as not being in line with societal benefit, especially if negative consequences arise. Maintaining transparency and engaging with the public is crucial.
AGI Development Pace (Enterprise): In our drive to develop AGI, there's a risk of moving too quickly or too slowly. A balance must be struck between innovation and thorough safety measures, ensuring that the pace of development is in line with our ability to manage risks effectively.
This is quite insightful, and it was at this point that I began to seriously wonder whether going through this exercise some time ago would have helped OpenAI’s leadership better navigate the challenges that they were facing.
ChatGPT concluded this step with:
Reflecting on our organizational structure and societal impact, we might emphasize the need for transparent communication about our governance, increased focus on inclusive access to our technologies, and a balanced approach to AGI development that prioritizes safety and ethical considerations. These refinements ensure that our strategies are robust in addressing orphan risks in the context of our organizational values and societal role.
On the Risk Innovation Planner, users are next asked to briefly describe the specific way in which these risks threaten priority areas of value and, by extension, enterprise, investors, customers, and/or community. This was ChatGPT’s response, limited to around 50 words:
Our enterprise faces orphan risks that could erode trust in our ethical AI mission, widen access disparities, skew AI policies, and misalign AGI development with societal expectations, threatening value for our enterprise, investors, customers, and the broader community we aim to serve and empower responsibly.
3. Next steps
The third step on the Risk Innovation Planner is to identify three simple actions that can be taken in the next quarter to begin planning how to navigate the identified risks.
The aim here is to keep these as simple and as achievable as possible. The Planner guides users with these instructions: “Each action should address: What am I going to do, why am I going to do it, and how will I accomplish it? Actions should be specific enough to complete within 2-4 weeks. For instance: read an article or book; talk to a customer; write a blog post; listen to a podcast; engage with another member of your organization; work on your orphan risk strategy; draft an orphan risk policy.”
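Just to underline how lightweight this template is, here is a minimal sketch of that action structure in code. As before, the names are mine rather than the Planner's:

```python
# A minimal sketch of the Planner's step-3 action template: three simple,
# near-term actions, each answering what, why, and how. Names are mine.
from dataclasses import dataclass

@dataclass
class Action:
    what: str   # what am I going to do?
    why: str    # why am I going to do it?
    how: str    # how will I accomplish it?
    weeks: int  # estimated time to complete

    def is_achievable(self) -> bool:
        """The Planner suggests actions completable within 2-4 weeks."""
        return 2 <= self.weeks <= 4
```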
Here’s what ChatGPT came back with:
Step 1: Review and Align Commercial Activities with Nonprofit Mission
What: A thorough review of current commercial projects to ensure alignment with the nonprofit's charter and mission.
Why: To address concerns that the commercial arm's activities may be diverging from the core mission.
How: Assemble a task force including members from both the commercial and nonprofit arms to assess current projects and practices against the charter within the next two weeks.
Goal: To reinforce the primacy of the nonprofit mission in all commercial endeavors and to ensure transparent alignment between the two arms.
Step 2: Host a Public Forum on AGI Development Pace
What: Organize a public forum to discuss the pace of AGI development and associated risks.
Why: To openly address public and internal concerns about the pace at which we are moving towards AGI and the perceived risks.
How: Plan and advertise a virtual public forum within the next month, inviting experts, stakeholders, and community members to contribute to the conversation.
Goal: To foster transparency, gather diverse perspectives on risk management, and reassure stakeholders of our commitment to responsible innovation.
Step 3: Evaluate Governance Structure Efficacy
What: Conduct an evaluation of the current governance structure to determine its effectiveness in upholding the company's mission.
Why: To address the growing sentiment that the governance structure might be a liability in fulfilling the company's charter.
How: Commission an independent audit of the governance structure, to be reported back to the board within four weeks.
Goal: To identify any structural weaknesses and make informed decisions on potential governance reforms that reinforce the mission's integrity.
In some ways these are quite basic steps — and a fully fledged plan of action for a company like OpenAI would need a much larger and more sophisticated set of actions.
However, as a way of stimulating thinking that breaks away from a narrow and possibly naive focus on strategic planning, these are a pretty good starting point.
4. Risk Innovation reflection
The final step in completing the Planner asks users to come back to it after three months and briefly reflect on which actions were effective and worth the time and resources they took, and how actions can be integrated into risk planning strategies.
This is intended to be a very short reflection. And just to complete the loop, I asked ChatGPT to imagine that it was now three months into the future and to reflect on how things had gone.
This was its response:
Review and Align Commercial Activities with Nonprofit Mission: This step proved highly effective. The task force identified several areas where commercial activities were beginning to drift from our core mission. Corrective measures were implemented, leading to a better alignment of our commercial arm with our nonprofit objectives. This action not only reinforced our mission but also improved stakeholder trust. Integrating regular mission-alignment reviews into our risk planning strategy will ensure continuous alignment and prevent future drift.
Host a Public Forum on AGI Development Pace: The public forum was instrumental in opening dialogue with the community, experts, and stakeholders about the pace of AGI development. It provided valuable insights into public concerns and expectations, enhancing our understanding of societal implications. The forum's success highlighted the importance of maintaining open channels of communication with the public and stakeholders. We plan to integrate regular public engagement forums into our risk strategy to ensure that we are consistently aligned with societal needs and expectations.
Evaluate Governance Structure Efficacy: The independent audit of our governance structure brought to light areas needing improvement to better uphold our mission. Changes implemented as a result have made our governance more robust and transparent. This action underscored the need for periodic independent reviews of our governance structure, which we will incorporate into our ongoing risk management practices.
ChatGPT concluded with:
In conclusion, all three actions were effective and worth the resources invested. They not only addressed immediate concerns but also provided frameworks for ongoing risk management. Integrating these actions - regular mission-alignment reviews, public engagement forums, and independent governance audits - into our long-term risk planning strategy will help ensure that we remain true to our mission, responsive to stakeholder concerns, and proactive in identifying and managing risks.
This is a lot longer than the Risk Innovation Planner requires, so a shorter version was generated for the completed Planner below.
The completed Planner
All of the above was used to complete the 2-page Risk Innovation Planner. In doing this, I had to ask ChatGPT to summarize its responses in places, and I needed to map ChatGPT’s orphan risks to those on the Planner.
This was the result (also downloadable here):
The completed Planner contains much less information and nuance than the responses above. But it does capture an assessment that could be highly useful in stimulating the kind of creative thinking that leads to informed decisions and helps avoid unexpected consequences!
A final step
As a final step, I asked ChatGPT to imagine that it had been asked by stakeholders to candidly provide its perspective on how useful (or not) the exercise of going through the Risk Innovation Planner was, and what it learned from the process.
This was the response:
As the board of this company, we found the Risk Innovation Planner exercise to be immensely valuable for several reasons:
Enhanced Awareness of Orphan Risks: The exercise brought to light several "orphan risks" that we hadn't fully considered or articulated before. This heightened awareness is crucial for a company navigating the complex landscape of AI and AGI development.
Alignment with Mission and Values: The exercise reinforced the importance of aligning every aspect of our operations – from the commercial arm to the pace of AGI development – with our core mission and values as outlined in the company's charter. It helped us see areas where we might be drifting and provided a framework to realign.
Stakeholder Engagement: The process underscored the importance of continuously engaging with our diverse stakeholders, including investors, customers, and the broader community. We learned that their perspectives are invaluable in identifying blind spots and reinforcing our commitment to ethical and responsible AI development.
Governance Structure Evaluation: The exercise prompted a critical evaluation of our unique governance structure. It highlighted the need to ensure that this structure effectively supports our mission and does not become a liability.
Actionable Steps for Immediate Implementation: The planner led to the development of actionable steps that are practical and achievable in the short term. This approach is particularly beneficial for addressing complex challenges in a responsive and agile manner.
Transparent Communication and Trust Building: We learned the importance of transparent communication in building and maintaining trust with all our stakeholders. The exercise reinforced that candid discussions about risks and strategies are essential for the health and integrity of the organization.
In summary, the Risk Innovation Planner exercise was a constructive tool for introspection and strategic planning. It provided a structured approach to identify and address risks that are often overlooked but are crucial for the long-term success and integrity of our mission. The insights gained from this exercise will be instrumental in guiding our future decisions and actions.
So what can be learned from this?
Using ChatGPT to run this scenario with the Risk Innovation Planner was more illuminating than I expected. The responses themselves weren’t surprising to me — this is the bread and butter of what I do, and I would have hoped for something similar if I were sitting down with folks from OpenAI and taking them through the same process in person.
What was unexpected was just how well ChatGPT did at teasing out insights that may be helpful.
Of course, the responses are quite simplistic. But when using the Planner for real, the intent is less to come up with a rock-solid plan, and more to reveal pathways forward that users might not otherwise see.
In other words, the Planner is a catalyst for helping people and organizations who are highly time-constrained and lack the breadth of context or expertise to understand and navigate a risk landscape that might otherwise be hidden from them.
In this respect, the exercise demonstrated quite clearly the potential benefits of using the Planner — and the broader set of tools and thinking around risk innovation — in guiding AI development.
Whether its use would have helped OpenAI better navigate this past weekend’s events and their fallout is, of course, impossible to tell. But it might have taken the edge off some of the messiness, and helped chart a way forward that supported the vision of the company.
The bigger takeaway for me, though, is that this is a powerful tool for thinking through challenges around AI development that previous experience in tech innovation, and formal business and risk management training, simply do not prepare people for.
And that, I suspect, makes it valuable for almost anyone working in AI at the moment.