Mercor Faces Five Lawsuits After Data Breach
Mercor, a $10 billion AI training-data firm, is facing five contractor lawsuits within a week after a supply-chain–linked security breach. The incident traces to malicious code inserted into the open-source LiteLLM library; attackers linked to TeamPCP (and later claimed by Lapsus$) harvested credentials and may have exposed contractor and customer datasets. Mercor says it was “one of thousands” affected, has contained the incident, engaged third-party forensics, and is communicating with stakeholders. Lawsuits allege violations of data privacy and consumer-protection laws and signal immediate legal and contractual fallout for a firm that supplies training data to major AI labs including Anthropic and OpenAI.
What happened
Mercor, a three-year-old startup that supplies training data to major AI labs and is valued at about $10 billion, is facing five contractor-filed lawsuits in a single week after a security incident tied to a supply-chain compromise. The breach is linked to malicious code planted in LiteLLM, a popular open-source library used to connect applications to AI services. Security researchers tie the insertion to a hacking group known as TeamPCP; Lapsus$ later claimed it accessed Mercor data.
Technical context
The attack exploited the software supply chain — a high-impact vector for AI infrastructure because compromised dependencies can silently harvest credentials and propagate across dozens or thousands of downstream systems. Fortune’s reporting describes malicious LiteLLM code designed to exfiltrate credentials and spread before discovery. Supply-chain attacks against client libraries are especially dangerous for firms like Mercor that centralize sensitive training datasets and contractor identity information.
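One practical first response to a compromised dependency is to scan lockfiles for known-bad releases. Below is a minimal sketch of that idea; the flagged LiteLLM version is purely hypothetical — real indicators of compromise would come from official security advisories, not this example.

```python
# Hypothetical set of known-compromised (package, version) pairs.
# In a real incident, populate this from vendor or CVE advisories.
COMPROMISED = {("litellm", "9.99.9")}

def find_compromised(requirements_text: str) -> list[tuple[str, str]]:
    """Return (package, version) pins in a requirements file that match
    the known-bad set. Only exact '==' pins are checked in this sketch."""
    hits = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if "==" not in line:
            continue
        name, version = line.split("==", 1)
        if (name.strip().lower(), version.strip()) in COMPROMISED:
            hits.append((name.strip(), version.strip()))
    return hits

lockfile = "requests==2.31.0\nlitellm==9.99.9\n"
print(find_compromised(lockfile))  # [('litellm', '9.99.9')]
```

A scan like this only catches exact pins; ranges, transitive dependencies, and vendored copies need a proper SBOM or dependency-audit tool.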
Key confirmed details
- Mercor confirmed it was “one of thousands of companies” affected and says it moved promptly to contain and remediate the incident, hiring third-party forensic investigators. (Heidi Hagberg, Mercor spokesperson)
- Fortune identifies customers including Anthropic and OpenAI and notes Mercor’s recent VC momentum, including a funding round led by Felicis Ventures last October.
- The attack has produced immediate downstream consequences: multiple contractors have filed lawsuits alleging negligent cybersecurity practices and violations of data-privacy and consumer-protection statutes.
Why practitioners should care
This is a live example of how supply-chain compromises in AI tooling can cascade into data exposures, contractual suspensions, and litigation. For ML engineers and data teams, the incident underscores the need for hardened dependency vetting, artifact reproducibility, credential hygiene, and segmentation between training-data stores and development environments. Legal risk is material for vendors handling contractor PII and proprietary datasets — insurance, incident response playbooks, and vendor SLAs will be forced into sharper focus.
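Dependency vetting in practice often comes down to pinning artifacts by cryptographic hash (as pip's `--require-hashes` mode does) and failing closed on anything unrecognized. The sketch below illustrates the pattern; the artifact name and pinned digest are illustrative assumptions, not real release hashes.

```python
import hashlib

# Hypothetical allowlist mapping artifact filenames to expected SHA-256
# digests. In practice this comes from a lockfile generated at a trusted
# point in time. The digest below is the SHA-256 of the empty byte string,
# used here only so the example is self-checking.
PINNED_HASHES = {
    "example-pkg-1.0.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Return True only if the artifact's SHA-256 matches its pinned digest."""
    expected = PINNED_HASHES.get(name)
    if expected is None:
        return False  # unknown artifact: fail closed
    return hashlib.sha256(data).hexdigest() == expected

print(verify_artifact("example-pkg-1.0.0.tar.gz", b""))         # True
print(verify_artifact("example-pkg-1.0.0.tar.gz", b"tampered"))  # False
```

Hash pinning would not have stopped a malicious commit landing upstream before the pin was taken, but it does block silent substitution of an already-vetted artifact — the propagation step that makes supply-chain attacks scale.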
What to watch
Monitor forensic findings for the scope of dataset exposure, whether proprietary customer projects were accessed, and whether attribution claims linking TeamPCP and Lapsus$ are confirmed. Watch for customer contract suspensions or indemnity disputes, regulatory inquiries, and broader ecosystem remediation of LiteLLM and similar libraries.
Scoring Rationale
A supply-chain attack affecting a major training-data provider poses significant technical and legal risks to practitioners and customers across the AI stack. The story materially affects vendor risk, dependency management, and data governance, though it is not yet an industry-defining platform shift.
