
JPMorgan Is Tracking Whether Its 65,000 Engineers Use AI. Their Reviews Depend on It.

LDS Team · Let's Data Science · 7 min read
Internal documents show the largest bank in America has updated performance expectations for software and security engineers, making AI adoption a measurable requirement. A dashboard tracks individual GitHub Copilot usage. Engineers who lag behind risk seeing it reflected in their reviews.

Somewhere inside JPMorgan Chase, a dashboard displays the AI tool usage of every software engineer in the company. It tracks who uses GitHub Copilot, how often they use it, and whether they fall into the "light user," "heavy user," or "non-user" category. That dashboard feeds into a performance review system that, as of late March 2026, formally ties AI adoption to career outcomes for roughly 65,000 engineers and technologists.

"There's a lot of anxiety in the environment right now," one longtime JPMorgan developer told Business Insider.

The anxiety is not abstract. By the end of March, most developers in JPMorgan's Global Technology team received official performance goals related to using AI to boost productivity and code quality. The goals are not suggestions. Internal documents reviewed by Business Insider confirm the bank has formally updated performance expectations for software and security engineers, making AI adoption a measurable requirement rather than a discretionary tool.

Engineers who are not adopting AI tools could see it reflected in their reviews.

The Surveillance Architecture

The tracking system is more granular than a typical corporate technology mandate.

JPMorgan's internal systems classify engineers into usage tiers based on their interaction with AI coding tools, primarily GitHub Copilot. The classifications are updated regularly, and managers receive data on their teams' adoption rates. The system does not merely track whether an engineer has activated the tool. It tracks frequency, context, and patterns of use.
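JPMorgan has not disclosed how its dashboard actually draws the tier boundaries, but the reported categories imply a simple threshold scheme. A minimal sketch, with invented cutoffs (the tier names come from the reporting; `LIGHT_THRESHOLD` and `HEAVY_THRESHOLD` are assumptions for illustration only):

```python
# Hypothetical illustration only: JPMorgan's actual classification logic
# and thresholds have not been disclosed. Cutoffs below are invented.
LIGHT_THRESHOLD = 1    # at least one Copilot interaction per week
HEAVY_THRESHOLD = 25   # invented cutoff for "heavy user"

def classify_usage(weekly_interactions: int) -> str:
    """Map a weekly Copilot interaction count to a usage tier."""
    if weekly_interactions < LIGHT_THRESHOLD:
        return "non-user"
    if weekly_interactions < HEAVY_THRESHOLD:
        return "light user"
    return "heavy user"

# Example team report, as a manager's dashboard might aggregate it.
team = {"alice": 40, "bob": 3, "carol": 0}
tiers = {name: classify_usage(n) for name, n in team.items()}
print(tiers)  # {'alice': 'heavy user', 'bob': 'light user', 'carol': 'non-user'}
```

Whatever the real thresholds are, the key property is the same: the metric is a count of interactions, which matters for the quality debate discussed below.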

Employees are expected to use AI tools when writing code, reviewing documents, and handling routine tasks. The bank also encourages use of ChatGPT and is preparing a pilot rollout of Anthropic's Claude Code, expected to begin as early as April 2026. The shift in expectation is explicit: performance reviews now assess both "what you achieve" and "how you achieve it," with AI adoption falling squarely into the second category.

For a bank that employs more software engineers than most technology companies, the scale of this initiative is striking. JPMorgan plans to spend approximately $20 billion on technology in 2026. The AI adoption mandate is one piece of a broader strategy to ensure that investment translates into measurable productivity gains across the organization.

The Results That Justified the Mandate

JPMorgan did not arrive at this policy through theory. It arrived through data.

Lori Beer, JPMorgan's Global Chief Information Officer, disclosed that tens of thousands of the bank's software engineers increased their productivity by 10% to 20% after using an internal coding assistant tool. The gains were significant enough to shift the conversation from "should engineers use AI" to "why aren't all engineers using AI."

The bank's internal AI training program, called "AI Made Easy," has enrolled tens of thousands of employees. Derek Waldron, JPMorgan's Chief Analytics Officer, described the goal as educating the worldwide workforce on making AI work for every employee. Waldron told interviewers that "software engineers need to be upskilled to build scalable systems based on agents and LLM components."

That framing reveals the longer-term ambition. JPMorgan is not simply asking engineers to use autocomplete. It is restructuring its technology organization around the assumption that AI-augmented engineering is the baseline, and that engineers who cannot or will not work with AI tools are operating below the new standard.

Not All Productivity Is Equal

The policy creates a genuine tension for the engineers subject to it.

On one side, the productivity data is real. A 10% to 20% efficiency gain from a coding assistant is not trivial. For engineers working on large-scale systems, the difference between writing boilerplate manually and generating it with Copilot can free hours per week for higher-value work: architecture decisions, debugging complex systems, mentoring junior developers.

On the other side, the mandate treats AI tool usage as a proxy for productivity in ways that may not reflect actual engineering quality. An engineer who writes careful, well-tested code without Copilot may produce better outcomes than one who generates more lines with it. The dashboard tracks volume of AI interaction, not quality of output. A "heavy user" classification does not mean an engineer is producing better software. It means they are using the tool more.
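The proxy problem is easy to demonstrate with toy numbers. In this invented example (both engineers and their stats are hypothetical; `defects_per_kloc` stands in for any real quality measure), a volume metric and a quality metric rank the same two engineers in opposite orders:

```python
# Hypothetical data: AI-interaction volume vs. a quality proxy.
engineers = [
    {"name": "A", "copilot_interactions": 300, "defects_per_kloc": 4.1},
    {"name": "B", "copilot_interactions": 12,  "defects_per_kloc": 0.8},
]

# Who looks best on a usage dashboard vs. on a quality measure?
by_volume = max(engineers, key=lambda e: e["copilot_interactions"])["name"]
by_quality = min(engineers, key=lambda e: e["defects_per_kloc"])["name"]

print(by_volume, by_quality)  # A B -- the "heavy user" is not the better engineer
```

A dashboard that only sees the left-hand column will reward engineer A; a code review will prefer engineer B.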

The concern is not hypothetical. Multiple studies have examined whether AI coding assistants actually improve software quality, and the results are mixed. A widely discussed study covered by LDS earlier this year found that developers using AI assistants were actually 19% slower on certain tasks than those working without them, suggesting that the productivity gains from AI tools depend heavily on the task, the developer's experience level, and the quality of the underlying model.

JPMorgan's internal data appears to tell a different story, at least for the specific workflows the bank measures. But the tension between company-wide metrics and individual engineering judgment is real, and it is felt acutely by the engineers whose reviews now depend on the dashboard.

The Regulatory Dimension

Banking is not a typical technology environment. Every tool that touches code at JPMorgan operates under regulatory constraints that do not apply at a startup or even at most technology companies.

AI-generated code in a financial institution must meet compliance standards for accuracy, auditability, and security. ChatGPT and Claude can produce incorrect results. In a banking context, incorrect code does not just create bugs. It can create regulatory violations, financial losses, or security vulnerabilities that invite enforcement actions from the OCC, the Fed, or the SEC.

JPMorgan's AI mandate implicitly asks engineers to balance two competing demands: use AI tools enough to register as an adopter on the dashboard, and verify every line of AI-generated output thoroughly enough to meet banking-grade compliance standards. Those two goals can conflict, particularly under time pressure.

The bank has not publicly addressed how it plans to handle situations where AI-generated code introduces compliance issues. The performance review system tracks adoption. It does not, based on available reporting, track the downstream quality or compliance status of AI-assisted output.

The Signal for the Industry

JPMorgan is not the first company to push AI adoption internally. But it may be the first to formalize the connection between AI tool usage and individual performance reviews at this scale, with this level of granularity, in an industry this heavily regulated.

The move sends a clear signal to every engineer in the financial services industry: AI proficiency is no longer optional. It is a performance metric. For the 65,000 engineers at JPMorgan specifically, and for the hundreds of thousands more at competing banks watching this experiment, the message is that career advancement now requires demonstrating fluency with tools that did not exist three years ago.

For data scientists and ML engineers, the implication extends beyond coding assistants. If the largest bank in America treats AI tool adoption as a performance requirement for software engineers, similar mandates for data teams, covering tools like Claude Code and other agentic AI systems, are likely to follow across the financial sector.

The broader question is whether this model spreads beyond banking. Google, Meta, and Microsoft have all invested heavily in internal AI tooling, but none have publicly tied individual performance reviews to adoption metrics in the way JPMorgan has. If JPMorgan's productivity data holds, other large enterprises will have a template. If the policy generates backlash, attrition, or compliance incidents, it will become a cautionary example of moving too fast on a technology that is still maturing.

The Bottom Line

JPMorgan Chase has decided that AI adoption is not a choice. It is a job requirement. A dashboard tracks every engineer's usage. Performance reviews now formally include AI adoption as a measurable criterion. The bank's own data shows 10% to 20% productivity gains from its internal coding assistant, and it is using that data to justify making the tools mandatory.

The policy is backed by $20 billion in technology spending and endorsed by the bank's most senior technology leadership. For the engineers living under it, the calculus is simple: learn the tools or risk falling behind in a system that is now explicitly tracking whether you do.

Whether that system measures the right thing is a question JPMorgan has not yet answered.
