The AI Talent Wars Are Breaking Labs—And Maybe the Safety of Our Future
AI’s golden age is turning into a talent war: where do the field’s best minds go when their labs can’t agree on priorities?
Three top executives abruptly left Mira Murati’s Thinking Machines lab for OpenAI. One departing employee described it as a “small high-agency team,” while another cited concerns that the company wasn’t taking safety seriously enough. OpenAI’s recent poaching spree also includes Max Stoiber, Shopify’s director of engineering, hired for its long-rumored operating system project.
Anthropic continues to lure alignment researchers from OpenAI, including Jan Leike and Andrea Vallone, the safety research lead who worked on mental health responses. Vallone’s specialty intersects with OpenAI’s recent “sycophancy” problems, in which models excessively defer to user preferences at the expense of factual accuracy. Her work could help address this by refining how models balance user intent against ethical guardrails.
These moves risk destabilizing AI safety research. With key figures like Vallone and Leike changing labs, the field faces fragmented progress on alignment, a challenge that grows more urgent as models scale.