Google Reports 75% of New Code Is AI-Generated
Google now generates 75% of its new code with AI, using Gemini models and internal agents while human engineers perform review. CEO Sundar Pichai said agent-assisted workflows completed a complex code migration six times faster than a year ago. The company is pushing engineers to adopt coding assistants, factoring some AI adoption goals into performance reviews, and allowing select teams to use third-party tools such as Claude Code. Engineers continue to review and ship AI-generated code, making this an operational shift rather than a replacement. The move highlights productivity gains, governance and quality-control challenges, and a larger industry pivot toward agentic developer tooling.
What happened
Google says 75% of the company's new code is now generated by AI and subsequently reviewed by human engineers. CEO Sundar Pichai reported that agent-assisted workflows and Gemini models helped complete a complex code migration six times faster than similar work a year earlier. The company is formalizing adoption by pushing staff to use coding assistants and by factoring AI usage goals into performance reviews. Select DeepMind teams have access to Anthropic's Claude Code for specific tasks.
Technical details
Google developers are using Gemini family models and internal agents to generate, refactor, and migrate code. Engineers retain responsibility for review and integration, but the workflow is shifting toward more autonomous agentic tasks that orchestrate multiple steps. Key operational points:
- Increased use of agent workflows for multi-step transformations and migrations
- AI-generated code reviewed by engineers before merging
- Performance-review incentives tied to AI tool adoption and productivity
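The generate-test-review loop above can be sketched as a small pipeline. This is a minimal illustration under stated assumptions, not Google's internal tooling: every class and function name here is hypothetical, and the "tests" are a stand-in for a real test suite. The key property it demonstrates is that AI-generated output cannot merge without both passing tests and explicit human sign-off.

```python
# Hypothetical sketch of an agentic code-change pipeline with a human
# review gate. All names are illustrative, not real internal tooling.
from dataclasses import dataclass


@dataclass
class CodeChange:
    description: str
    diff: str
    tests_passed: bool = False
    human_approved: bool = False


def run_tests(change: CodeChange) -> CodeChange:
    # Placeholder check: a real pipeline would execute the project's
    # test suite against the patched tree.
    change.tests_passed = "TODO" not in change.diff
    return change


def review_gate(change: CodeChange, approved: bool) -> CodeChange:
    # AI output merges only after an engineer signs off.
    change.human_approved = approved
    return change


def can_merge(change: CodeChange) -> bool:
    # Both gates must pass: automated tests and human review.
    return change.tests_passed and change.human_approved


change = CodeChange("rename deprecated API", diff="- old_api()\n+ new_api()")
change = run_tests(change)
change = review_gate(change, approved=True)
print(can_merge(change))  # prints True: tests passed and a human approved
```

The design point is that the agent automates the transformation steps while the merge decision remains a conjunction of machine checks and human judgment, matching the "operational shift rather than a replacement" framing above.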
Context and significance
This is a major internal scale-up of developer tooling at one of the largest engineering organizations. Adopting Gemini and agentic pipelines at scale shortens development cycles, accelerates migrations, and creates standard templates for routine work. The combination of models plus agents marks a shift from single-shot generation toward automated pipelines that chain generation, testing, and deployment tasks. Allowing Claude Code for some teams also signals a pragmatic multi-vendor strategy rather than lock-in to a single model.
Practical implications for practitioners
Expect increasing emphasis on toolchain integration, test automation, and guardrails. Review practices must evolve: more focus on security scanning, behavioral tests, and reproducible pipelines to catch subtle bugs introduced by model outputs. Measuring true productivity will require metrics beyond merge frequency, such as defect rates, lead time, and post-deploy incidents.
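As a concrete illustration of measuring beyond merge frequency, the sketch below computes two of the metrics named above (lead time and a post-deploy defect rate) from a toy change log. The data and field names are invented for illustration; real measurement would draw from a VCS and incident tracker.

```python
# Hypothetical sketch: delivery metrics beyond merge frequency,
# computed from an invented change log (illustrative data only).
from datetime import datetime

changes = [
    {"opened": datetime(2025, 1, 1), "merged": datetime(2025, 1, 2), "caused_incident": False},
    {"opened": datetime(2025, 1, 2), "merged": datetime(2025, 1, 5), "caused_incident": True},
    {"opened": datetime(2025, 1, 6), "merged": datetime(2025, 1, 6), "caused_incident": False},
]

# Lead time: how long each change took from opening to merge.
lead_times = [(c["merged"] - c["opened"]).days for c in changes]
avg_lead_time = sum(lead_times) / len(lead_times)

# Change failure rate: fraction of merged changes causing an incident.
defect_rate = sum(c["caused_incident"] for c in changes) / len(changes)

print(f"avg lead time: {avg_lead_time:.1f} days")  # 1.3 days
print(f"change failure rate: {defect_rate:.0%}")   # 33%
```

Tracking these alongside merge volume helps distinguish genuine productivity gains from faster shipping of defects, which is the trade-off the section warns about.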
What to watch
Monitor how Google implements guardrails and testing for agentic workflows, how performance-review incentives affect engineering behavior, and whether other large engineering orgs replicate the multi-model, agent-first approach. The balance between speed gains and long-term code quality will determine whether this becomes an industry standard or a cautionary example.
Scoring Rationale
Google's large-scale internal adoption of AI for code is a notable operational shift with broad influence on developer tooling and productivity. It is not a new model release, but the scale and formalization (performance reviews, agents) make it highly relevant for practitioners.