The copying of AI models has pushed three of Silicon Valley's biggest artificial intelligence players into an unusual alliance, with the practice now viewed not just as a commercial headache but as a strategic threat.
Anthropic, Google and OpenAI have begun quietly sharing intelligence to detect and block attempts by Chinese firms to copy their most advanced systems, according to people familiar with the effort.
The companies are using the Frontier Model Forum, an industry non-profit they founded with Microsoft in 2023, as the channel for alerting one another to suspicious activity and so-called adversarial distillation attacks that breach their terms of use.
In practice, that means monitoring for huge volumes of automated queries, unusual access patterns and obfuscated routing through third parties that could indicate a rival is harvesting outputs to train its own models.
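The volume-based part of that monitoring can be pictured as a sliding-window rate check over per-key request logs. The sketch below is purely illustrative, assuming a hypothetical `QueryRateMonitor` class and threshold values; it is not any lab's actual detection pipeline, which would also weigh access patterns and routing signals.

```python
from collections import deque
import time


class QueryRateMonitor:
    """Flag API keys whose request rate over a sliding window exceeds a
    threshold -- a crude proxy for industrial-scale automated harvesting.
    Illustrative only; thresholds and key handling are hypothetical."""

    def __init__(self, window_seconds=60, max_requests=1000):
        self.window = window_seconds
        self.max_requests = max_requests
        self._events = {}  # api_key -> deque of request timestamps

    def record(self, api_key, now=None):
        """Log one request; return True if the key now looks suspicious."""
        now = time.monotonic() if now is None else now
        q = self._events.setdefault(api_key, deque())
        q.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while q and q[0] < now - self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A real system would combine this kind of rate signal with fingerprinting of query content and correlation across accounts, since harvesting is often spread over many keys and proxies.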
Executives say this kind of copying lets competitors undercut US labs on price, potentially costing them billions of dollars a year in lost revenue, though those estimates have not been independently verified.
In AI research, model distillation is a recognised technique: a smaller "student" model learns by imitating the responses of a larger "teacher" system, often making deployment cheaper and more efficient when done with consent.
OpenAI and Anthropic argue that some Chinese developers have crossed a line by using paid or free access to US services at industrial scale, generating vast numbers of responses and then using that data to train competing chatbots.
In a memo to US lawmakers earlier this year, OpenAI accused Chinese startup DeepSeek of "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs", relying on distillation to build its R1 model.
Anthropic has separately warned that Chinese labs including DeepSeek, Moonshot and MiniMax have tried to extract the capabilities of its Claude systems, and says some distilled models strip out safety guardrails that were designed to prevent abuse.
While the Chinese companies have not publicly acknowledged these allegations, US firms are now framing the issue as more than a business dispute.
They argue that powerful foundation models distilled without proper constraints could be repurposed for cyber attacks, disinformation campaigns or other hostile uses, and have told Congress the threat "extends beyond any single company or region".
The joint stance by Anthropic, Google and OpenAI marks a rare moment of solidarity in an intensely competitive field.
It suggests that as AI model copying accelerates, leading US developers are prepared to cooperate more closely with each other and with governments to defend both their technology and what they see as wider economic and security interests.

