Andrea Vallone Joins Anthropic Safety Team

Andrea Vallone, who led OpenAI’s model policy research for three years, has joined Anthropic’s alignment team, she announced in LinkedIn posts this week. At OpenAI she worked on the deployments of GPT-4 and GPT-5 and on safety training methods such as rule-based rewards; at Anthropic she will work under Jan Leike, studying how models should respond to users who show signs of emotional over-reliance or mental-health distress.

Scoring Rationale

A notable personnel shift with clear safety implications, though the limited technical detail reduces its broader immediate impact.