US State Dept Alerts Global Diplomats on AI Theft
The U.S. State Department has ordered its overseas posts to warn foreign governments about alleged efforts by Chinese companies to extract and "distill" U.S. AI models, according to a diplomatic cable seen by Reuters. The cable names Chinese firms including DeepSeek, and mentions Moonshot AI and MiniMax, instructing diplomats to raise "concerns over adversaries' extraction and distillation of U.S. A.I. models," Reuters reports. The Chinese Embassy called the accusations "groundless," Reuters adds. Separate U.S. White House and OSTP materials have described "industrial-scale" distillation campaigns and cited evidence shared with U.S. AI companies, according to The Next Web. OpenAI previously flagged DeepSeek to U.S. lawmakers, according to Reuters and The Next Web.
What happened
The U.S. State Department sent a diplomatic cable dated April 24 instructing U.S. diplomatic and consular posts to raise with foreign counterparts "concerns over adversaries' extraction and distillation of U.S. A.I. models," Reuters reports. The cable, seen by Reuters, also notes that "a separate demarche request and message has been sent to Beijing for raising with China," Reuters adds. Reporting names Chinese companies including DeepSeek, and mentions Moonshot AI and MiniMax as firms of concern, per Reuters and India Today. The Chinese Embassy in Washington described the allegations as "groundless," Reuters reports. India Today reports that DeepSeek previewed a new model, V4, adapted for Huawei chip technology. The Next Web reports that U.S. White House and Office of Science and Technology Policy materials describe what they call "industrial-scale" distillation campaigns and say the government will share intelligence with U.S. AI firms.
Editorial analysis - technical context
Distillation, as described in reporting, is the technique of training smaller models on outputs from larger models by issuing large volumes of queries and using the returned responses as training data. Industry coverage notes the technique does not require stealing model weights or breaking into servers, but rather systematic extraction of behavior through APIs, The Next Web explains. Editorial analysis: Companies and legal commentators have highlighted that the intellectual property and contractual boundaries around using model outputs to build competing systems remain unsettled, which complicates enforcement and attribution.
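To make the query-and-train loop described above concrete, here is a minimal conceptual sketch. Everything in it is illustrative: the `teacher_model` function stands in for a large hosted model reached over an API (no real provider's API is used), and the "student" is a toy stand-in rather than a real neural network. The point is only to show the mechanism reporting describes, in which behavior is extracted through responses rather than by stealing weights.

```python
# Conceptual sketch of distillation-by-querying. All names are
# hypothetical; no real model, API, or provider is represented.

def teacher_model(prompt: str) -> str:
    """Stand-in for a large model queried through an API endpoint."""
    return f"answer({prompt.lower()})"

def collect_distillation_data(prompts):
    """Issue many queries and keep (prompt, response) pairs as training data.
    This is the 'extraction' step: only outputs are collected, never weights."""
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    """Toy 'student' that learns to mimic the teacher from collected pairs.
    A real distillation pipeline would fine-tune a smaller neural model."""
    def __init__(self):
        self.table = {}

    def train(self, pairs):
        for prompt, response in pairs:
            self.table[prompt] = response

    def generate(self, prompt: str) -> str:
        return self.table.get(prompt, "")

prompts = ["What is AI?", "Define distillation"]
dataset = collect_distillation_data(prompts)  # behavior extracted via queries
student = StudentModel()
student.train(dataset)                        # student imitates teacher outputs
```

At scale, the same pattern involves millions of automated queries, which is why coverage notes that defenses focus on API rate limiting, terms-of-service enforcement, and query-pattern monitoring rather than on conventional network security.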
Industry context
Reporting places the diplomatic action within a broader U.S. push that includes a White House memo, intelligence-sharing commitments, and proposed legislation, with The Next Web noting the timing ahead of a planned Trump-Xi summit. Editorial analysis: Observers following the sector will see this move as part of a pattern where tradecraft and diplomatic channels are used to address technology transfer concerns, rather than relying solely on commercial litigation or export controls.
What to watch
- Whether the demarche to Beijing generates a formal response or follow-up action recorded by U.S. posts, as noted in the Reuters cable.
- Progress on U.S. legislative measures such as the Deterring American AI Model Theft Act, referenced in reporting by The Next Web.
- Public disclosures from named firms, and whether U.S. agencies publish technical indicators or legal guidance for distinguishing permissible model use from large-scale distillation, an issue highlighted across coverage.
Editorial analysis: For practitioners, the key operational implications will be increased scrutiny around large-scale automated querying, tightened terms of service enforcement, and the need for clearer contractual and technical guardrails when sharing API access or model outputs. Reporting does not include direct quotations from DeepSeek on the allegations beyond product previews, and the companies named have varying public responses in the coverage.
Scoring Rationale
This story combines diplomatic action, governmental policy signals, and allegations targeting frontier-model extraction, which materially affect model custodianship, API controls, and legal risk for AI practitioners.