IBL News | New York
Jan Leike, the leading researcher who resigned from OpenAI earlier this month before publicly criticizing the company’s approach to AI safety, has been hired by Anthropic.
Leike co-led OpenAI’s Superalignment team, which aimed to solve the core technical challenges of controlling superintelligent AI within four years. OpenAI’s leadership, headed by Sam Altman, recently dissolved the team.
In a post on X, Jan Leike said that his team at Anthropic will focus on various aspects of AI safety and security, as well as automated alignment research.
I’m excited to join @AnthropicAI to continue the superalignment mission!
My new team will work on scalable oversight, weak-to-strong generalization, and automated alignment research.
If you’re interested in joining, my dms are open.
— Jan Leike (@janleike) May 28, 2024
Anthropic has often positioned itself as more safety-focused than its rival, OpenAI.
Its researchers are currently working on techniques to control large-scale AI behavior in predictable and desirable ways.
Years ago, Anthropic’s CEO, Dario Amodei, formerly VP of research at OpenAI, left the company after a disagreement over its direction, namely its growing commercial focus.
Amodei launched Anthropic with some ex-OpenAI employees, including OpenAI’s former policy lead, Jack Clark.
Meanwhile, OpenAI has formed a new committee to oversee “critical” safety and security decisions related to the company’s projects and operations.
OpenAI decided to staff the committee with company insiders — including Sam Altman, OpenAI’s CEO — rather than outside observers.