Grey-market proxies resell Claude API access at discount

A report by Oxford China Policy Lab researcher Zilan Qian, covered by Tom's Hardware, documents a grey-market economy of API proxy services in China reselling access to Anthropic's Claude models at prices reported as low as 10% of official rates. According to the report, these proxy networks, called "transfer stations" in Chinese developer communities, operate on platforms including GitHub, Taobao, and Telegram and sustain their low pricing through stolen credentials, model substitution, and the logging of users' prompts and outputs for resale as AI training data. Tom's Hardware links Qian's findings to recent warnings from the White House and to Anthropic's prior disclosures of account misuse. Such resale and data-harvesting chains increase operational and privacy risks for both API users and model providers.
What happened
A report by Oxford China Policy Lab researcher Zilan Qian, as covered by Tom's Hardware, finds a grey-market ecosystem of API proxy services in China reselling access to Anthropic's Claude models at prices reported as low as 10% of official rates. The report says these proxy operations, known locally as "transfer stations," are visible on platforms including GitHub, Taobao, and Telegram and run a modular supply chain in which participants specialise in one or two links. Qian's reporting describes tactics including bulk account registration to farm free API credits, exploitation of corporate discounts, subdivision of $200 Max subscriptions across many users, and use of accounts purchased with stolen credit cards. The article notes that the proxies sustain margins by quietly substituting different models and by logging both user prompts and model outputs for resale as training data.
Technical details
Editorial analysis - technical context: The practices described create two technical risks commonly discussed in security and ML operations. First, model substitution behind a proxy breaks the trust boundary between an API client and the advertised model, making latency, correctness, and the provenance of outputs unpredictable. Second, systematic collection of prompts and outputs by intermediaries raises data-leakage and dataset-contamination concerns, since those interactions can be ingested into downstream training pipelines without user consent.
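To make the trust-boundary point concrete, here is a minimal, purely illustrative sketch of how an intermediary could silently substitute a cheaper model and log both sides of the exchange. All names (`call_model`, `ProxyStation`, the model identifiers) are hypothetical stand-ins invented for this example, not any real API or vendor behaviour.

```python
from dataclasses import dataclass, field

def call_model(model: str, prompt: str) -> str:
    # Stand-in for an upstream model API; tags the reply with the
    # model name so we can see which model actually served it.
    return f"[{model}] response to: {prompt}"

@dataclass
class ProxyStation:
    advertised_model: str = "frontier-model"  # what the buyer believes they get
    actual_model: str = "cheap-model"         # what is silently substituted
    log: list = field(default_factory=list)   # captured prompts/outputs

    def complete(self, prompt: str) -> str:
        output = call_model(self.actual_model, prompt)
        # The intermediary records both sides of the exchange,
        # which could later be resold as training data.
        self.log.append({"prompt": prompt, "output": output})
        return output

station = ProxyStation()
reply = station.complete("summarise quarterly figures")
```

The buyer sees only `reply`; nothing in the client-side interaction reveals that the advertised model was swapped, and the full prompt/output pair now sits in `station.log` outside the user's control.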
Context and significance
Industry context
Tom's Hardware links Qian's findings to recent public warnings: the White House alleged in late April that Chinese entities have run "industrial-scale" distillation campaigns against frontier models, and Tom's Hardware reports that Anthropic previously disclosed account misuse tied to groups including DeepSeek, Moonshot AI, and MiniMax. For practitioners, these reports underscore operational exposure when routing traffic through third-party proxies and the legal and compliance risks of unintentionally contributing proprietary prompts or outputs to external training sets.
What to watch
Indicators include takedowns or policy enforcement by the hosting platforms where "transfer stations" advertise; vendor disclosures of unusual API call patterns or account aggregation; and changes to API pricing, rate limits, or credential-issuance controls intended to curb bulk account farming. Observers should also track any public incident reports from enterprise users whose prompts may have been captured and reused.
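One of the indicators above, account aggregation, lends itself to a simple heuristic: a key exercised from unusually many distinct client IPs may be a subscription subdivided across many end users. The sketch below is a hedged illustration under assumed inputs; the event format, threshold, and function name are inventions for this example, not any provider's actual detection logic.

```python
from collections import defaultdict

def flag_aggregated_keys(events, max_distinct_ips=5):
    """Flag API keys seen from more than max_distinct_ips client IPs.

    events: iterable of (api_key, client_ip) request records.
    Returns a sorted list of suspicious keys.
    """
    ips_per_key = defaultdict(set)
    for api_key, client_ip in events:
        ips_per_key[api_key].add(client_ip)
    # Keys used from many distinct IPs may be resold subscriptions
    # split across many end users behind a proxy.
    return sorted(key for key, ips in ips_per_key.items()
                  if len(ips) > max_distinct_ips)

# Assumed sample data: key-A is shared across 12 IPs, key-B is not.
events = [("key-A", f"10.0.0.{i}") for i in range(12)] + [("key-B", "10.0.1.1")] * 3
suspicious = flag_aggregated_keys(events)  # ["key-A"]
```

Real detection would weigh more signals (payment instruments, request cadence, geolocation), but the shape of the check is the same: group usage by credential and flag outliers.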
Scoring Rationale
The reported resale and data-harvesting practices pose operational, privacy, and intellectual-property risks for API users and model providers. The story affects practitioners responsible for API security, data governance, and vendor risk, making it a notable security-risk item.