What could Sam Altman's departure from OpenAI mean for societally beneficial AI?
While this is still a breaking story and speculation is rife, the sacking of Altman as OpenAI's CEO could have far-reaching consequences for AI and its societal impacts.
Ironically, I was in a presentation on the cutting edge of AI on Friday when news broke that Sam Altman had been removed as CEO of OpenAI. It was one of those moments where you could see a wave of distraction spreading through the room as people stopped listening to the speaker and switched to their social media feeds.
As I’m writing, it’s still unclear what precipitated OpenAI’s board’s decision to oust Altman, or what the consequences might be — but it’s hard to imagine this move not shaking up the AI landscape quite significantly.
The early indications are that growing tensions between OpenAI’s commercial arm and its not-for-profit mission came to a head, accelerated by Altman’s announcements at the company’s DevDay a couple of weeks ago.
In their statement, the board claimed that Altman was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities”. What this means exactly is currently in the realm of supposition and conjecture. But there are hints coming from commentators and experts who are familiar with OpenAI that there were growing concerns over both the direction that Altman’s personal agenda was taking the company, and how this was pulling OpenAI from its mission to build “safe and beneficial artificial general intelligence for the benefit of humanity”.
As Fast.ai founder Jeremy Howard noted on X, “I had a feeling I knew what this was about...”, quoting a seemingly cryptic September 29 post from OpenAI chief scientist (and board member) Ilya Sutskever: “Ego is the enemy of growth.”
Of course, all of this is speculation at this point. However, as the implications of Altman’s ousting unfold, it’s worth keeping a close eye on three frames, and the deep intersections between them: those covering technological advances in AI, the governance and regulation of AI, and the broader societal impacts of AI.
Advances in underlying AI science and technology
OpenAI is, without a doubt, a heavyweight in the world of AI advances and the underlying foundation models. The GPT models continue to be some of the most powerful large language models around, and their integration with image and voice recognition/generation technologies is impressive. Yet OpenAI isn’t the only game in town, and I suspect that a change of leadership isn’t going to have too great an impact on the technologies that underlie AI more broadly.
I’ll leave it to those who are more deeply involved in the technical foundations of emerging AI to speculate more extensively about how changes in OpenAI’s leadership may alter this landscape. But my sense is that with models like Anthropic’s Claude, Meta’s Llama, and a growing ecosystem of other open access and commercial models — never mind those being developed by universities and other research institutions — there won’t be too much of a perturbation around next steps in AI development.
Where we may see a shift, though, is in the underlying ethos behind this development, and the speed at which advances find their way into commercial or otherwise publicly accessible products.
The OpenAI board’s re-articulated commitment to the organization’s charter — which highlights broadly distributed benefits, long-term safety, technical leadership, and cooperative orientation — may be important here.
Governance of AI
OpenAI, and Sam Altman in particular, have been deeply involved in national and global deliberations around AI governance over the past twelve months.
Despite strong pushes from some quarters for promoting open-source AI as essential for responsible, agile, and distributed governance, there has been consolidation around more powerful AI systems (especially so-called “frontier models”) being closed and closely regulated. Here, leading companies — OpenAI amongst them — have had an outsized influence in guiding the framing of regulations that seemingly favor commercial leaders in the field.
How Altman’s departure from OpenAI potentially changes things here isn’t clear, but this is an area to watch closely. On one hand, the company has an “in” with regulators, and may be in a position to wield influence that moves the regulatory needle toward frameworks that favor the broader emergence of AI capabilities that are more closely aligned with benefiting humanity writ large.
Yet this depends largely on whether policy makers were listening to the company, or to Altman.
My suspicion is that Altman was the “shiny person” they were interested in. If so, he will continue to have influence in how policies evolve, and there’s a danger that OpenAI will sink into relative policy obscurity (although I hope this isn’t the case).
Whichever way things go, the next several months of policy and regulatory wrangling are going to be important for the future of AI, and Altman’s dismissal may open a sliver of a window of opportunity to nudge things in new directions.
AI and society
While the governance and technological implications of Altman’s departure are important, I suspect that the greatest impacts will be felt — at least initially — within society more broadly.
In the jargon of science and technology studies, AI is part of a complex socio-technical system, meaning that the dynamics between the technology, policy, commercialization, communities, individuals, and more, are deeply intertwined and influential. In other words, AI isn’t just about the technology, but about the society in which it’s developed and the multidimensional factors and influences that come into play here.
As a result, it’s impossible to speculate on the technological and governance implications of OpenAI’s decision without understanding how these are situated within the society we are all a part of.
It’s fair to say that, over the past year, OpenAI has had a profound impact on society — and one that isn’t commensurate with technological breakthroughs alone.
Educators have quite literally had to re-examine and re-structure how they teach, following the launch of ChatGPT.
Universities have scrambled to set up AI initiatives, accelerate the development of AI tools, provide funding for AI breakthroughs, and invest in AI-driven research — all driven by a hype- and promise-filled generative AI revolution sparked by ChatGPT.
And OpenAI’s competitors have put the metaphorical pedal to the metal in developing and deploying their own models — often far faster than a measured and responsible approach would suggest is wise.
Before November 30, 2022, there were powerful large language models available to developers and early adopters. But the launch of ChatGPT, and the subsequent availability of the subscription-based ChatGPT Plus, changed everything. This was a social and commercial as much as a technological step, and it led to a cascading series of events that changed society.
It’s too early to say how this will play out with a change of leadership at OpenAI (and, most likely, an accompanying change in direction). But in a “socio-technical system” where ChatGPT and OpenAI wield so much influence, any changes in direction and action are likely to have important yet extremely hard-to-predict impacts.
This is where I suspect we’ll see some of the greatest disruption over the next few months. This may be a good thing — a reset from the hype of the past year, and a chance to take a breath (a pause even) and rethink and recalibrate the relationship between AI and society, and how this can bring about a better future for as many as possible.
Alternatively, it could send many plans into a tailspin, with unpredictable social outcomes that aren’t so good.
I guess we’ll see as the situation unfolds. But one thing is clear — this is a point where forward thinking, agility, and a strong commitment to socially responsible and beneficial AI are going to be more important than ever.
Update, Nov 20, AM. After a tumultuous weekend where, at some points, it seemed that Sam Altman would return to OpenAI, it was announced late Sunday night by Microsoft CEO Satya Nadella that Altman and former President (and co-founder) of OpenAI Greg Brockman, together with colleagues, would be joining Microsoft “to lead a new advanced AI research team”.
I suspect that this is just the start of the next act of this story, and there’s plenty more drama and disruption to come — which makes the potential disruptions to AI governance and societal impacts all the more relevant. The Microsoft move may also have a substantial impact on technology development as Altman’s new team are given new resources and OpenAI’s team is potentially depleted — we’ll see.
Update, Nov 22, AM. OpenAI announced late Tuesday night that an agreement had been reached whereby Sam Altman returns as CEO of the company:
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
On the face of it, things feel like they’re getting back to where they were after a tumultuous few days. However, with the new board and the relational and personality shakeups, this is likely to have long-term ripple effects on the socially responsible development of AI.
The story isn’t over yet!