AI’s Safety Brain Drain: Why The Guardians Are Walking Away
A wave of high-profile resignations reveals mounting tension between commercial ambition and safety oversight at leading AI labs.
Welcome to Memorandum Deep Dives. In this series, we go beyond the headlines to examine the decisions shaping our digital future. 🗞️
This week, we’re diving into a growing fracture within the AI industry: why some of its most senior safety experts are leaving the very companies that build the world’s most powerful systems.
For years, AI labs positioned safety as their moral center of gravity—a competitive differentiator as much as a public commitment. Teams dedicated to alignment, safeguards, and policy were meant to ensure that increasingly capable systems would not outpace society’s ability to manage them. But as valuations soar into the hundreds of billions and IPO preparations intensify, the balance between principle and performance is shifting. What appears to be ordinary executive turnover is, in reality, more structural: a collision between the economics of hypergrowth and the constraints of responsible deployment.

Ship the message as fast as you think
Founders spend too much time drafting the same kinds of messages. Wispr Flow turns spoken thinking into final-draft writing so you can record investor updates, product briefs, and run-of-the-mill status notes by voice. Use saved snippets for recurring intros, insert calendar links by voice, and keep comms consistent across the team. It preserves your tone, fixes punctuation, and formats lists so you send confident messages fast. Works on Mac, Windows, and iPhone. Try Wispr Flow for founders.
*This is sponsored content. See our partnership options here.

From Declinism to Departure
For centuries, successive generations have assumed the world is on the verge of collapse. Modern psychologists and behavioral scientists often identify this assumption as a cognitive bias called declinism, the tendency to view the past as better and the future as worse.
The literature has repeatedly reinforced this perception, with writers across eras capturing generational anxieties in reflections on decline and crisis. Cultural traditions, especially those emphasizing the rise and fall of civilizations, have further entrenched this pessimistic outlook, making present-day fears feel historically familiar and almost inevitable.
Today, the idea that the world is in peril has acquired a technological dimension, fueled by the rapid development of artificial intelligence.
Over the past week, it manifested publicly as the most concentrated wave of safety-related departures in the AI industry's short history.
The safety brain drain
In a short span of time, the tech world was rocked by the exodus of several high-profile names working on AI safety.
Anthropic’s head of Safeguards Research, Mrinank Sharma, resigned with a public warning about global risks. The Wall Street Journal reported that OpenAI had dismissed a VP of product policy following disagreements over a proposed “adult mode.”
OpenAI researcher Zoe Hitzig announced her resignation in a New York Times op-ed criticizing the company’s advertising direction. Platformer reported that OpenAI had disbanded its Mission Alignment team, while xAI saw additional co-founder exits, adding to a broader pattern of leadership turnover.
A concentrated week of departures
The departures are not routine Silicon Valley churn. They represent the sharpest collision yet between safety conviction and commercial momentum at the labs building the world’s most powerful AI systems. And it is happening precisely as OpenAI, Anthropic, and xAI all race toward public listings that could collectively value them at more than $1.5T.
When considered together, these departures illuminate a tension that has long shadowed the AI industry but is becoming increasingly difficult to ignore.
A pattern years in the making
The recent departures are also the latest chapter in a pattern that stretches back to May 2024, when OpenAI co-founder Ilya Sutskever and Superalignment lead Jan Leike resigned within hours of each other.
At the time, Leike's public remarks made clear his concern that safety considerations were being deprioritized in favor of product development. Their departures were widely understood to be linked to the disbanding of the Superalignment team, which had been allocated a significant share of OpenAI's computing resources.
Since then, departures have continued with notable regularity, suggesting a longer-running pattern rather than isolated incidents.
OpenAI co-founder John Schulman moved to Anthropic; former CTO Mira Murati exited; economics researcher Tom Cunningham left after expressing discomfort with the direction of his team's work; policy research head Miles Brundage resigned, citing constraints around publishing; and safety researcher William Saunders stepped away amid concerns about the company's prioritization of product development.
Around the same time, Andrea Vallone, whose research examined how ChatGPT interacts with users experiencing mental health distress, also left OpenAI, adding to the broader movement of senior and specialized talent away from the company.
Why is this wave different?
The February 2026 wave stands apart for reasons that go beyond the number of exits. The turbulence is no longer confined to OpenAI, as Anthropic, a company founded by former OpenAI safety researchers and widely perceived as a safety-centric counterweight, is also seeing the departure of senior safety leadership.
This development raises uncomfortable questions about whether the underlying tensions are industry-wide rather than limited to any single organization. At the same time, the nature of these exits has become more visible and contentious, with disagreements and critiques surfacing in public rather than remaining internal, as evidenced by the firing of a product policy vice president and the publication of a resignation in a national newspaper.
The cumulative effect is a clear shift in intensity: what once resembled ordinary talent mobility now looks like a pronounced pattern of friction over safety, governance, and strategic direction.
Meanwhile, xAI has experienced significant turnover, with roughly half of its founding team departing over the past year, even as the company navigates regulatory scrutiny related to the behavior and safeguards of its Grok chatbot.
The economic pressure behind the exits
This pattern of friction can be better understood through an economic lens. OpenAI is valued at approximately $500B, with a potential IPO in late 2026 or 2027. Anthropic is raising at a $350B valuation and has retained Wilson Sonsini to begin IPO preparations. Both companies face enormous pressure to demonstrate revenue growth.
OpenAI forecasts a $14B loss in 2026 despite generating roughly $20B in revenue. Its ChatGPT advertising test, charging a $60 CPM with a $200K minimum advertiser commitment, is designed to help close that gap. The adult mode feature, tapping into a sector valued at nearly $200B, represents another revenue lever.
This is the core tension: those responsible for ensuring products are safe and ready for mass consumption are being overridden by those preparing for a public offering. And investors are beginning to notice the governance gap.

Ship Models That Improve in Production
If you're building applied AI, the hard part is rarely the first prototype. You need engineers who can design and deploy models that hold up in production, then keep improving them once they're live.
- Python and PyTorch proficiency
- Production deployment experience
- 40–60% cost savings
This is the kind of talent you get with Athyna Intelligence—vetted LATAM PhDs and Masters working in U.S.-aligned time zones.
*This is sponsored content.

The governance gap
The exodus also underscores a point that regulators worldwide should confront: the problem may not be fixable from within the AI labs themselves.
Anthropic was supposed to prove that a safety-first AI company could compete commercially without compromising its principles. Sharma's departure suggests even that experiment has limits. If multi-hundred-billion-dollar valuations and pre-IPO pressures can erode the internal safety culture of a company explicitly founded to resist them, then the industry likely needs external infrastructure: independent testing bodies with contractual model access, mandatory incident reporting, and staffing-disclosure requirements.
The EU AI Act, the UK AI Safety Institute, and the U.S. NIST framework are all moving in this direction. Whether they can move fast enough is the open question.
What remains open is whether the AI industry’s internal safety apparatus is sufficient. The people who built it are telling us, loudly and publicly, on their way out the door, that it is not.

Public regulators are beginning to formalize AI oversight outside corporate control.
Beyond declinism
The idea of declinism is hardly new, but the present moment carries pressures that feel unusually immediate. The pace of AI development, the growing volume of expert warnings, and the steady departure of those responsible for safety and alignment collectively suggest that this generation may face the difficult task of managing risks that arrive faster than institutional safeguards can mature.
Public intellectuals such as Yuval Noah Harari have observed that the “collapse of social truth” is not a novel crisis but a recurring fragility in human societies. What distinguishes the current era is the scale and speed at which AI systems can produce and amplify misinformation, placing new strain on already vulnerable social trust structures. In such conditions, it is no longer sufficient to dwell on narratives of decline alone. The more urgent challenge is to cultivate collective resilience, strengthen governance, and reinforce accountability so that catastrophe remains a matter of human choice rather than an assumed technological destiny.
P.S. Want to collaborate?
Here are some ways.
Share today’s news with someone who would dig it. It really helps us to grow.
Let’s partner up. Looking for some ad inventory? Cool, we’ve got some.
Deeper integrations. If it’s longer-form storytelling you are after, reply to this email and we can get the ball rolling.

What did you think of today's memo?