“The problem is Sam Altman”: OpenAI insiders don’t trust CEO
OpenAI brainstorms ways AI can benefit humanity in effort to counter bad vibes.
On the same day that OpenAI released policy recommendations to ensure that AI benefits humanity if superintelligence is ever achieved, The New Yorker dropped a massive investigation into whether CEO Sam Altman can be trusted to actually follow through on OpenAI's biggest promises.
Parsing the publications side by side can be disorienting.
On the one hand, OpenAI said it plans to push for policies that "keep people first" as AI starts "outperforming the smartest humans even when they are assisted by AI." To achieve this, the company vows to remain "clear-eyed" and transparent about risks, a commitment it said includes monitoring for extreme scenarios like AI systems evading human control or governments deploying AI to undermine democracy. Without proper mitigation of such risks, "people will be harmed," OpenAI warned, before describing how the company could be trusted to advocate for a future where achieving superintelligence means a "higher quality of life for all."