Anthropic Says Chinese AI Labs ‘Mined’ Claude as Washington Spars Over Chip Exports

Anthropic has accused three prominent Chinese AI companies of systematically “mining” its Claude chatbot to clone its most advanced capabilities, thrusting questions of AI theft, security and geopolitics into the center of an already heated U.S. debate over exporting cutting‑edge AI chips to China.

Chinese labs accused of “industrial‑scale” Claude mining

In a detailed disclosure, San Francisco–based Anthropic said Chinese AI labs DeepSeek, Moonshot AI and MiniMax collectively created more than 24,000 fake user accounts to bombard Claude with over 16 million prompts and responses. The company alleges the firms used a technique known as “distillation” to extract and replicate Claude’s strengths in areas like advanced reasoning, tool use and coding, with the goal of training their own rival models.

Anthropic says the activity unfolded over months and was designed to look like normal user behavior, but large‑scale statistical analysis revealed tightly patterned prompts, unusual volumes and coordinated behavior across thousands of accounts that did not match organic usage. The company is calling the operation “industrial‑scale model theft,” arguing that the exchanges effectively siphoned off capabilities that cost U.S. firms years of research and billions of dollars to develop.

How the alleged distillation attacks worked

Distillation is a routine machine‑learning technique in which developers use one model’s outputs as training data to build another, often smaller or cheaper, system. In Anthropic’s telling, the Chinese labs crossed a line by turning Claude into a high‑value target for systematic extraction rather than legitimate use.
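The basic mechanic is simple to sketch. The toy below (the `teacher_model` stub and `collect_distillation_data` helper are invented for this illustration, not any lab's actual pipeline) shows the core loop: query a "teacher" model for responses, then keep the prompt–completion pairs as supervised fine‑tuning data for a "student."

```python
# Illustrative sketch of output-based distillation: collect a teacher
# model's responses to prompts, then use the (prompt, completion) pairs
# as supervised fine-tuning data for a student model. The teacher here
# is a stand-in function; in practice it would be an API call.

def teacher_model(prompt: str) -> str:
    # Placeholder for a call to a large model's API.
    return f"Step-by-step answer to: {prompt}"

def collect_distillation_data(prompts):
    """Build a supervised fine-tuning dataset from teacher outputs."""
    dataset = []
    for prompt in prompts:
        response = teacher_model(prompt)
        dataset.append({"prompt": prompt, "completion": response})
    return dataset

prompts = [
    "Explain how to parse a CSV file.",
    "Plan a three-step data analysis.",
]
data = collect_distillation_data(prompts)
# Each record becomes one training example for the student model.
```

At the scale Anthropic alleges, this loop would run across millions of prompts, which is why the company treats unusually high volumes of templated queries as a warning sign rather than ordinary use.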

According to Anthropic’s breakdown, DeepSeek generated more than 150,000 tightly focused exchanges aimed at Claude’s core reasoning, safety alignment and censorship‑safe handling of sensitive policy topics. Moonshot AI is said to have orchestrated about 3.4 million interactions targeting multi‑step “agentic” reasoning, tool use, data analysis and early work on computer‑use agents and vision. MiniMax allegedly ran the largest campaign, with roughly 13 million interactions targeting coding, orchestration and broad tool‑use behaviors; Anthropic claims it even observed MiniMax traffic being redirected en masse to Claude when a new version of the model launched.

Anthropic says it detected the campaigns through behavioral “fingerprints,” such as systematically varied prompts designed to elicit step‑by‑step reasoning, highly uniform query patterns and usage profiles that looked more like capability mapping than real‑world problem solving. The company says it has since shut down the fraudulent accounts and is investing in stronger defenses to make such large‑scale distillation attacks harder to carry out and easier to spot, while urging cloud providers, rival labs and policymakers to join a coordinated response.
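Anthropic has not published its detection methods, but one crude fingerprint of the kind described — abnormally uniform prompts within an account — can be illustrated with a toy detector. Everything here (`jaccard`, `uniformity_score`, `flag_suspicious`, the threshold) is invented for the sketch; real systems would use far richer signals across accounts.

```python
# Illustrative toy: score each account by the average pairwise Jaccard
# similarity of its prompts' word sets. Systematically varied template
# prompts score high; organic usage scores low.

from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def uniformity_score(prompts):
    sets = [set(p.lower().split()) for p in prompts]
    pairs = list(combinations(sets, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def flag_suspicious(accounts, threshold=0.6):
    """accounts: dict of account_id -> list of prompt strings."""
    return [aid for aid, ps in accounts.items()
            if uniformity_score(ps) >= threshold]

accounts = {
    "acct_1": [  # templated, systematically varied prompts
        "Solve step by step: integrate x^2",
        "Solve step by step: integrate x^3",
        "Solve step by step: integrate x^4",
    ],
    "acct_2": [  # organic-looking, varied usage
        "What's a good pasta recipe?",
        "Summarize this meeting transcript",
        "Debug my Python script",
    ],
}
# flag_suspicious(accounts) -> ["acct_1"]
```

A single uniformity metric like this is easy to evade; the point of Anthropic's claimed approach is that coordination only becomes visible when statistics are aggregated across thousands of accounts.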

National security and AI safety concerns

Beyond the commercial implications, Anthropic is framing the incident as a national‑security and AI‑safety risk. The company argues that U.S. labs build systems with explicit safeguards to prevent misuse in areas like bioweapons development and offensive cyber operations, but models created through illicit distillation are unlikely to preserve those guardrails. In a blog post, Anthropic warned that dangerous capabilities could spread globally “with many protections stripped out entirely,” especially if derived models are later open‑sourced or deployed by authoritarian governments.

Those concerns echo a separate warning from OpenAI, which recently sent a memo to the U.S. House Select Committee on the Chinese Communist Party alleging that DeepSeek had used distillation techniques and obfuscated access methods to “free‑ride on the capabilities developed by OpenAI and other U.S. frontier labs.” OpenAI told lawmakers that employees tied to DeepSeek had sought to bypass access restrictions and route traffic through third‑party services to hide its origin.

Security experts say Anthropic’s allegations make concrete what many in Washington have suspected for months. Dmitri Alperovitch, chair of the Silverado Policy Accelerator and co‑founder of CrowdStrike, told TechCrunch that it has long been “clear” that part of the rapid progress of Chinese AI models stems from distillation‑based copying of U.S. frontier systems, and that the new evidence strengthens calls to block advanced chip sales to implicated firms.

U.S. chip export fight takes on new urgency

The revelations land as the Trump administration loosens some export controls on advanced AI chips to China, a policy reversal that has already split lawmakers and the national‑security community. In January, the U.S. Commerce Department shifted from a “presumption of denial” to a case‑by‑case review for exports of Nvidia’s H200 and comparable AMD processors, effectively green‑lighting sales of powerful AI accelerators to Chinese customers under certain conditions.

Supporters of the change argue that allowing controlled exports keeps U.S. chipmakers central to the global AI ecosystem, generates revenue and preserves visibility into Chinese AI development. Critics counter that H200‑class chips, which are significantly more powerful than any domestically produced Chinese alternative, could close the compute gap between U.S. and Chinese labs and help fuel both direct model training and large‑scale distillation of American systems.

Anthropic explicitly links the alleged Claude extraction campaigns to access to high‑end hardware, arguing that the sheer scale of 16‑plus million interactions “requires access to advanced chips.” In its blog, the company says the attacks “reinforce the rationale for export controls,” claiming that restricting China’s access to such hardware can limit both frontier‑model training and illicit distillation campaigns that piggyback on U.S. APIs.

What happens next

Anthropic has not announced any lawsuits but has signaled it will continue working with policymakers and industry peers to define red lines around model distillation and strengthen platform‑level defenses. The company is also pressing for clearer rules on how API providers can detect, block and share information about suspected extraction campaigns without running afoul of privacy or competition concerns.

Chinese companies DeepSeek, Moonshot AI and MiniMax have not publicly responded in detail to the latest allegations, though Chinese AI firms have previously argued that distillation and benchmarking are standard practices in a highly competitive field. With Washington already debating how far to go in restricting China’s access to U.S. chips and AI services, Anthropic’s disclosure is likely to become a fresh talking point for lawmakers pushing for tighter controls and new legal tools to treat large‑scale distillation as a form of economic and technological espionage.