Andrew, your piece here is especially relevant given the recent implosion of the board at OpenAI and all the turmoil around Sam Altman and the direction (and speed) at which they are improving their products. If AI risk, as you point out, is difficult to assess, then it just got harder. It seems the “business” side of OpenAI won out over the “non-profit” (effective altruism) side. But perhaps this has also raised awareness of the approach taken by companies like Anthropic, which seem to have a more cautious and thoughtful “constitutional” framework for AI. There is no doubt that companies implementing AI solutions will likely hedge their bets (less risk) by considering multiple LLMs rather than hitching their AI wagon to one horse.
Donald Rumsfeld famously spoke about "unknown unknowns" years ago, and recently I've read about the theory that separates risk from uncertainty, reserving the word "risk" for conditions with known probabilities. Whatever you call it, AI clearly falls into the less knowable category.
Yep -- this is one way to do it, and it's why there's growing interest in uncertainty science. Yet we still need approaches that deal with consequences, which is why we ended up adopting a definition of risk as threat to value, along with the idea of orphan risks -- https://futureofbeinghuman.com/p/navigating-orphan-risks