AI News: OpenAI Is Paying Researchers to Find What Could Go Wrong — But Its Internal Safety Teams Are Already Gone

The AI news out of OpenAI this week has a sharp edge: the company launched a paid Safety Fellowship offering $3,850 weekly stipends to external researchers studying what could go wrong with advanced AI. The announcement came within hours of a New Yorker investigation reporting that OpenAI had dissolved its internal safety teams and quietly removed the word “safely” from its IRS mission statement.
OpenAI announced the fellowship on April 6 as “a pilot program to support independent safety and alignment research and develop the next generation of talent.” The program pays $3,850 per week, over $200,000 annualized, plus roughly $15,000 in monthly compute, along with mentorship from OpenAI researchers. Fellows work from Constellation’s Berkeley workspace or remotely, and applications close May 3. The fellowship is not limited to AI specialists: OpenAI is recruiting from cybersecurity, social science, and human-computer interaction alongside computer science.
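For readers checking the numbers, both annualized figures follow directly from the amounts stated above; the sketch below assumes a 52-week, 12-month year, which is our simplification rather than anything in the announcement.

```latex
% Back-of-the-envelope annualization of the fellowship's stated figures.
% Assumption: 52 paid weeks and 12 compute-credit months per year.
\[
  \underbrace{\$3{,}850 \times 52 = \$200{,}200}_{\text{stipend, ``over \$200{,}000 annualized''}}
  \qquad
  \underbrace{\$15{,}000 \times 12 = \$180{,}000}_{\text{compute, per year}}
\]
```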
OpenAI Launches Safety Fellowship as Its Internal Safety Teams Are Gone
The timing is the story. Ronan Farrow’s investigation in The New Yorker, published the same day, documented that OpenAI had dissolved three consecutive internal safety organizations over 22 months. The superalignment team was shut down in May 2024 after co-leads Ilya Sutskever and Jan Leike departed. Leike wrote on his way out that “safety culture and processes have taken a backseat to shiny products.” The AGI Readiness team followed in October 2024. The Mission Alignment team was disbanded in February 2026 after just 16 months. The New Yorker also reported that when a journalist asked to speak with OpenAI’s existential safety researchers, a company representative replied: “What do you mean by existential safety? That’s not, like, a thing.”
The fellowship explicitly does not replace internal infrastructure. Fellows receive API credits and compute resources but no system access, positioning the program as arm’s-length research funding rather than a rebuild of the dissolved teams.
What the Fellowship Requires Fellows to Produce
The research agenda spans seven priority areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. By the program’s end in February 2027, each fellow must produce a substantive output — a paper, benchmark, or dataset. Specific academic credentials are not required; OpenAI stated it prioritizes research ability, technical judgment, and execution capacity.
Why This Matters Beyond the AI Industry
As crypto.news has reported, confidence in frontier AI companies’ stated safety commitments is a market signal that affects capital allocation across AI infrastructure, AI tokens, and the DePIN and AI agent protocols sitting at the intersection of crypto and artificial intelligence. OpenAI’s spending trajectory and the credibility of its operational priorities are tracked closely by investors evaluating the AI infrastructure sector, which increasingly overlaps with blockchain-based systems. Whether external fellows working without internal access can meaningfully influence model development is a question the first cohort’s research will begin to answer in early 2027.