Anthropic CFO Reports AI Writing Most Company Code
According to Business Insider, Anthropic CFO Krishna Rao said on Patrick O'Shaughnessy's "Invest Like the Best" podcast that the company's AI, Claude, now writes 90% of Anthropic's code and handles most of its finance work. Rao described AI as a productivity "accelerant" that collapses hours of office work into minutes, added, "We've hired a lot more people because of that," and framed the shift in employee roles as a move from execution to oversight, judgment, and strategy. Editorial analysis: industry practitioners should treat this as a concrete example of large-model internal adoption and study how oversight, verification, and developer workflows adapt when models handle routine execution.
What happened
According to Business Insider, Anthropic CFO Krishna Rao told Patrick O'Shaughnessy's "Invest Like the Best" podcast that the company's AI, Claude, now writes 90% of Anthropic's code and handles most of its finance work. In the interview, Rao said AI is "collapsing hours of office work into minutes" and called the technology a productivity "accelerant." He also said, "We've hired a lot more people because of that," and described a shift in employee activity from execution toward oversight, judgment, and strategy.
Editorial analysis - technical context
Companies deploying large language models internally for code generation and business automation typically face higher demand for robust verification, test automation, and prompt engineering. As an industry pattern, integrating Claude-style models into engineering workflows usually requires investment in CI/CD hooks, model-output linting, and reproducible prompts, rather than a simple replacement of developer tasks.
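To make the pattern above concrete, here is a minimal sketch of what a "model-output linting" gate with reproducible prompt metadata might look like. This is purely illustrative: the function name, the checks, and the record fields are assumptions for the example, not Anthropic's actual pipeline.

```python
import ast
import hashlib
import json


def verify_generated_code(code: str, prompt: str, model_id: str) -> dict:
    """Gate model-generated Python before it enters human review.

    Returns a verification record that could be attached to a commit
    or CI artifact. Hypothetical sketch; checks are illustrative.
    """
    record = {
        # Hash the prompt and record the model ID so the generation
        # step is auditable and reproducible later.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "model_id": model_id,
        "syntax_ok": False,
        "issues": [],
    }
    try:
        tree = ast.parse(code)
    except SyntaxError as exc:
        record["issues"].append(f"syntax error: {exc}")
        return record
    record["syntax_ok"] = True
    # Cheap static checks a human reviewer would otherwise spend time on.
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"eval", "exec"}:
                record["issues"].append(f"disallowed call: {node.func.id}")
    return record


if __name__ == "__main__":
    result = verify_generated_code(
        "def add(a, b):\n    return a + b\n",
        "write an add function",
        "example-model",
    )
    print(json.dumps(result, indent=2))
```

A CI hook could run a check like this on every model-authored diff and block merges whose records contain issues, shifting the human role toward reviewing the record rather than re-deriving it.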
Industry context
Industry reporting has documented similar claims of productivity gains from internal LLMs at other AI and enterprise tech firms. For practitioners, the scale Rao cites (90% of code generated by a model) is an outlier-level claim that, if replicated elsewhere, would shift skill mixes toward model orchestration, validation, and systems design and away from manual code authorship.
What to watch
Observers should track whether Anthropic or its peers publish reproducible metrics on defect rates, review time, and deployment incidents after large-scale code generation. Also watch for published tooling around model-assisted testing, automated formal checks, and role descriptions that quantify oversight tasks versus execution tasks.
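The metrics named above are straightforward to compute once commits are labeled by authorship. The sketch below is a hypothetical illustration of that comparison; the record fields and function name are assumptions for the example, not a published methodology.

```python
from dataclasses import dataclass


@dataclass
class CommitRecord:
    model_generated: bool   # was the diff authored by a model?
    defect_found: bool      # did review or CI flag a defect?
    review_minutes: float   # time a human spent reviewing


def adoption_metrics(commits: list[CommitRecord]) -> dict:
    """Compare defect rate and review time for model vs human commits.

    Illustrative sketch of the kind of reproducible metric the article
    suggests watching for; field names are hypothetical.
    """
    out = {}
    for label, group in (
        ("model", [c for c in commits if c.model_generated]),
        ("human", [c for c in commits if not c.model_generated]),
    ):
        n = len(group)
        out[label] = {
            "commits": n,
            "defect_rate": sum(c.defect_found for c in group) / n if n else 0.0,
            "avg_review_minutes": sum(c.review_minutes for c in group) / n if n else 0.0,
        }
    return out
```

Publishing numbers like these, rather than a single headline percentage, is what would let outside practitioners judge whether the productivity claim holds up.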
Source attribution
All reported facts above come from Business Insider's coverage of Krishna Rao's appearance on the "Invest Like the Best" podcast, as quoted in the Business Insider article published May 13, 2026.
Scoring Rationale
A major AI company executive claiming that an internal model writes **90%** of code is notable for practitioners evaluating LLM adoption impact. The story signals significant operational changes but lacks independently verifiable metrics, placing it in the "notable" range.