Tuesday, 7 April 2026

CNCB News


OpenAI's newest fellowship includes up to $15,000 in AI compute a month


The OpenAI program closely mirrors an existing Anthropic safety-focused fellowship.

OpenAI CEO Sam Altman
  • OpenAI has a new safety-focused fellowship program.
  • The announcement came hours after a report questioned CEO Sam Altman's commitment to AI safety.
  • OpenAI said fellows will receive roughly $15,000 worth of compute per month.

OpenAI is making compute resources a core part of its sales pitch for a new safety fellowship.

According to the application, OpenAI will dedicate approximately $15,000 in compute per month to its first AI safety fellows, who will work alongside members of the frontier lab's safety team.

Compute has long been a key barometer of cachet for leading tech and AI companies. Recently, Nvidia CEO Jensen Huang said he would be "deeply alarmed" if an engineer earning $500,000 didn't use the equivalent of $250,000 in AI tokens.

The OpenAI fellowship runs from September 14, 2026, through February 5, 2027, and pays a weekly stipend of $3,850. Annualized, that stipend works out to more than $200,000 pretax, before accounting for holidays. In total, fellows will earn over $111,000.
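As a quick sanity check on the stipend figures above, the annualized equivalent can be computed directly (a minimal sketch; the $3,850 weekly figure comes from the application, while 52 paid weeks is an assumption used only for the annualized comparison):

```python
# Back-of-the-envelope check of the stipend figures cited in the article.
WEEKLY_STIPEND = 3_850  # dollars per week, per the fellowship application

# Annualized equivalent: 52 paid weeks, pretax, ignoring holidays (assumption)
annualized = WEEKLY_STIPEND * 52
print(f"Annualized equivalent: ${annualized:,}")  # prints "Annualized equivalent: $200,200"
```

The result, $200,200, matches the article's "over $200,000 pretax" characterization.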

The announcement also came hours after The New Yorker published a lengthy exposé questioning Altman's trustworthiness, based on interviews with more than 100 people who have direct experience with the OpenAI CEO, as well as never-before-published notes compiled by OpenAI cofounder Ilya Sutskever and now-Anthropic CEO Dario Amodei.

One key reason some people question Altman's leadership is OpenAI's handling of safety-related issues. In one part of the story, journalists Ronan Farrow and Andrew Marantz recount how OpenAI disbanded a "superalignment team" that was supposed to investigate one of the most pressing issues facing AI: whether AI models could deceive testers only to pursue their own ends once actually deployed.

In the announcement, OpenAI said it wants "external researchers, engineers, and practitioners to pursue rigorous, high-impact research on the safety and alignment of advanced AI systems." The company also highlighted a handful of priority areas for fellows to focus on.

"Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains, among others," the announcement said.

OpenAI's program closely mirrors one from rival Anthropic, which has already established a "Fellows Program for AI safety research." In December, Anthropic announced two new cohorts for May and July of 2026. OpenAI's benefits are also the same as the Anthropic program's: a $3,850 weekly stipend and compute resources of roughly $15,000 per month.

"This year, we plan to work with more fellows across a wider range of safety research areas—including scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, and model welfare," Anthropic said in a statement when it announced its next round of the fellowship in December.

Amodei cofounded Anthropic alongside six former OpenAI employees after growing frustrated with the company's direction. Anthropic recently weakened a core safety pledge, but Amodei and its top leadership have positioned the AI startup as intently focused on safety.

Other leading tech companies and AI labs, including Google DeepMind and Microsoft, offer broader-focused AI fellowships.

Read the original article on Business Insider