Indeed CIO Rejects Tokenmaxxing Leaderboard for AI Use
Indeed is tracking employee AI usage but will avoid explicit leaderboards that reward raw token consumption. CIO Anthony Moisant says the company monitors token volume behind the scenes for cost and security telemetry, but ties incentives to outcome-focused metrics rather than usage counts. Moisant warns that visible leaderboards create perverse incentives, encouraging employees to maximize token use rather than improve hiring outcomes. Indeed plans to prioritize metrics linked to business impact, privacy, and reproducibility while keeping usage telemetry internal and governance-driven.
What happened
Indeed's CIO, Anthony Moisant, publicly rejected creating a "Tokenmaxxing"-style leaderboard and said the company will stay "far, far away" from gamifying raw AI token consumption. He confirmed Indeed monitors token use for cost, security, and operational telemetry, but emphasized those signals remain in the background and are not tied to employee incentives.
Technical details
Moisant described the core rationale as avoiding the perverse incentives that arise when simple, easy-to-track metrics become reward mechanisms. He wants metrics that sit closer to outcomes, such as candidate placement rates, time-to-hire improvements, and quality-of-hire signals, rather than tokens consumed or API calls made. Practitioners should note four operational levers Indeed is implicitly prioritizing:
- telemetry that captures cost, latency, and error rates rather than public leaderboards
- outcome metrics aligned to product KPIs, for example time-to-hire or placement conversion
- privacy and reproducibility controls to avoid exposing sensitive prompts or PII
- governance rules that separate usage monitoring from compensation or peer ranking
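The outcome-first framing above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Indeed's implementation: the `HiringEvent` schema and the AI-assisted/baseline cohort split are assumptions, but they show the shape of a KPI that measures hiring outcomes rather than token volume.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Hypothetical event record; field names are assumptions for illustration.
@dataclass
class HiringEvent:
    req_id: str
    posted_at: datetime
    hired_at: datetime
    ai_assisted: bool  # whether an AI tool was used in the workflow

def median_time_to_hire(events):
    """Outcome metric: median days from job posting to hire."""
    days = [(e.hired_at - e.posted_at).days for e in events]
    return median(days) if days else None

events = [
    HiringEvent("r1", datetime(2025, 1, 1), datetime(2025, 1, 21), True),
    HiringEvent("r2", datetime(2025, 1, 5), datetime(2025, 1, 19), True),
    HiringEvent("r3", datetime(2025, 1, 2), datetime(2025, 2, 1), False),
]

# Compare the outcome metric across cohorts instead of counting tokens.
ai_cohort = median_time_to_hire([e for e in events if e.ai_assisted])
baseline = median_time_to_hire([e for e in events if not e.ai_assisted])
print(ai_cohort, baseline)  # → 17.0 30
```

The point of the sketch is the unit of measurement: the reward signal is days-to-hire per cohort, and token counts never appear in it.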
Context and significance
The pushback responds to a broader trend where companies like Meta experimented with leaderboards or internal competitions that reward heavy model usage, a practice sometimes called "Tokenmaxxing." That tactic can increase cloud spend and shift employee behavior toward maximizing calls to LLMs instead of solving the business problem. Indeed's stance is a practical example of an organization treating AI telemetry as an engineering and governance input, not a productivity scoreboard. For ML teams, this reinforces a migration from raw usage metrics to signal-derived KPIs and counterfactual checks that validate whether model usage improves downstream outcomes.
What to watch
Expect technical follow-through in the form of dashboards that map model interactions to hiring outcomes, anonymized usage pipelines, and guardrails that disconnect personal incentives from raw API volumes. The operational tradeoffs to monitor: keeping cost visibility without turning telemetry into an incentive, and building reliable outcome attribution for ML-in-the-loop workflows.
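An anonymized usage pipeline of the kind described above might look like the following sketch. The log schema, salt handling, and team-level rollup are all assumptions for illustration; the idea is that cost telemetry is aggregated to the team level and identities are one-way hashed, so the data can drive spend dashboards but cannot feed a per-person leaderboard.

```python
import hashlib
from collections import defaultdict

# Hypothetical usage log; schema is an assumption for illustration.
usage_log = [
    {"user": "alice", "team": "search",   "tokens": 1200},
    {"user": "bob",   "team": "search",   "tokens": 800},
    {"user": "carol", "team": "matching", "tokens": 3000},
]

def anonymize(user_id: str, salt: str = "rotate-me") -> str:
    """One-way hash so downstream dashboards never see individual identities.
    In practice the salt would be secret and rotated; "rotate-me" is a placeholder."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

# Cost visibility: roll tokens up to the team level only.
team_tokens = defaultdict(int)
for row in usage_log:
    team_tokens[row["team"]] += row["tokens"]

print(dict(team_tokens))  # → {'search': 2000, 'matching': 3000}
```

The design choice mirrors the governance stance in the article: the aggregation boundary (team, not person) is what makes the telemetry useful for cost control while structurally preventing its reuse as a ranking mechanism.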
Scoring Rationale
Notable corporate stance on AI governance that affects telemetry and incentive design for ML teams. It is relevant to practitioners designing monitoring and KPI systems, but it is not a technical breakthrough.