China’s AI Is Spreading Fast. Here’s How to Stop the Security Risks #AI


In late 2024, Chinese models accounted for one percent of global AI workloads. By the end of 2025, that figure had surged to 30 percent. Alibaba’s Qwen family now boasts over 700 million downloads, making it the world’s largest provider of “open-source” AI systems that are publicly released and capable of being downloaded and run locally. A constellation of Chinese AI labs — DeepSeek, Moonshot, and MiniMax chief among them — are increasingly popular fixtures of a global, open-source marketplace, which is starting to power everything from Indian academic research to America’s most elite technology startups.

Though they are open-weight and free to use, Chinese-origin AI models are still developed by companies that are subject to the country’s National Intelligence Law and liable to “support, assist, and cooperate” with the Chinese government’s national security investigations and intelligence collection activity. For U.S. policymakers, the risks associated with widespread adoption of Chinese models are sure to eclipse those of TikTok. Users are not uploading videos of themselves dancing, but soliciting feedback on proprietary code, business strategies, and sensitive communications — fragments of which are deposited directly into systems accessible to China’s security services.

The rapid integration of Chinese AI systems into U.S. national and global infrastructure poses four distinct baskets of possible threats to U.S. national security: supply chain poisoning, intelligence collection, capability uplift for malicious actors, and economic displacement — each requiring targeted interventions that avoid replicating China’s own protectionist playbook.

Supply Chain Poisoning and Indirect Control

The first problem is not about China, but about AI as a technology: It is incredibly difficult to audit the global supply chain for AI software. This is both because the technology is a “black box” — today’s models regularly exceed tens of billions of parameters in size, which cannot be easily inspected for anomalies — and because internet anonymity protects software developers.

Recent War on the Rocks commentary has outlined the difficulties associated with auditing generated code: Research from Anthropic and the UK’s AI Safety Institute has demonstrated that as few as 250 poisoned documents can successfully establish a backdoor in a mid-sized (13 billion parameter) language model. These backdoors are encoded in the model’s statistical weights. Without knowing what they are looking for, security teams find them extraordinarily difficult, if not impossible to detect during a conventional code review or security test. They can be activated by triggering highly specific inputs, meaning that a compromised model can pass leading safety benchmarks while still harboring exploitable vulnerabilities that make it liable to sabotage or jailbreaking.

This is not a theoretical problem but a well-established feature of today’s open weight model landscape. As early as April 2025, security researchers at Protect AI had identified over 352,000 suspicious files across 51,700 models on the world’s largest platform for hosting and downloading AI models, Hugging Face. More than 15 percent of enterprise AI projects today rely on open-source models from public repositories. When organizations choose to build on top of Chinese base models without independent verification, they are inheriting whatever vulnerabilities — or deliberately engineered backdoors — those models might contain. This attack surface is expanding at incredible speed as the number of small AI models skyrockets.

Even if the detection problem could be solved, there is the additional question of who should be responsible for fixing it. No regulatory framework today assigns responsibility or liability to actors who poison models, nor to the platforms that distribute them. In July 2025, Pillar Security disclosed a novel attack vector affecting 1.5 million commonly used model files, which allowed attackers to embed malicious instructions invisible to most users and security tools. When Pillar Security reported the vulnerability to HuggingFace, the platform declined to classify it as a security issue. With this decision, HuggingFace effectively declared that verifying the integrity of models it distributes is someone else’s problem.

As supply chain risks crystallize, regulators will grapple with the question of who should be held responsible when things go awry. One solution could be for the Commerce Department’s Bureau of Industry and Security to designate AI model repositories as part of the Information and Communications Technology and Services supply chain and issue binding security requirements under Executive Order 13873, including provenance documentation and automated scanning for known poisoning signatures. The National Institute of Standards and Technology is currently fast-tracking a standardized testing protocol for model integrity. It could likewise offer an Underwriters Laboratories certification for AI weights to provide enterprises with some baseline degree of confidence as they incorporate open source software.

For its part, Congress should begin the difficult work of extending existing software liability frameworks to cover the distribution of AI models. It is not clear whether platforms like Hugging Face should bear some degree of legal responsibility for conducting basic integrity checks on the models they serve to American users. Cloud service providers and app stores conduct various degrees of screening on the software they distribute. The precise contours of this responsibility — including what constitutes adequate due diligence and how to divide liability between platform and developer — will require the same careful calibration that has defined the Section 230 debate for traditional internet platforms. But the status quo, in which no one bears responsibility for harms that might accrue from hosting or publishing open weight models, is untenable.

Policymakers should also understand that even a robust liability regime will not be sufficient to deter every harmful deployment of an AI system. The most rigorous security reviews will still lag behind novel poisoning techniques, and no certification can guarantee the absence of a backdoor in even a mid-sized model. Even if the United States were to impose some liability regime on model hosts, the most sophisticated threat actors will migrate to platforms beyond U.S. regulatory reach — and every compliance layer risks imposing friction that pushes developers toward less regulated alternatives, potentially including Chinese-hosted alternatives with no screening at all.

The goal should not be to make the supply chain impervious, but to raise the cost of poisoning enough to deter casual threat actors, and to ensure that when a sophisticated attack does succeed, there is some baseline legal framework for accountability rather than a collective shrug.

Data Exfiltration and Intelligence Collection

A second problem with Chinese AI models will be familiar to anyone who has followed the national debate over TikTok: The applications route sensitive user data through servers located in China. While most users of China’s open weight models are downloading and running them locally on computers in their custody, many others are not. This can present a direct intelligence collection opportunity for Beijing.

Under China’s 2017 National Intelligence Law, companies “must support, assist, and cooperate with state intelligence work.” Users sharing contracts, code, and strategic documents with these systems are, in effect, depositing them into a Chinese government-accessible database. Moreover, even when users are not directly interfacing with a Chinese web chatbot, several U.S. developers are integrating DeepSeek or Qwen into their own applications by sending programmatic requests to the model provider’s servers in China. Each call to an application programming interface (API) transmits a context window containing the user’s query, relevant background data, and often prior conversation history — up to 100,000 words per request for DeepSeek’s V3 model. A startup using DeepSeek’s API to build an internal coding assistant is transmitting snippets of its proprietary codebase with every API call.

To address mounting risks to American data security, some policymakers may be tempted to ban the adoption of Chinese AI models outright — but this would be unenforceable in practice and deeply damaging to the U.S. developer ecosystem. DeepSeek and Qwen are available for download from dozens of mirrors and third-party platforms. API access can be routed through intermediaries, and any prohibition broad enough to cover the full spectrum of Chinese-origin AI services would risk sweeping up thousands of derivative models fine-tuned by developers with no connection to Chinese security services.

A more productive approach would require transparency about where Chinese models are being deployed. If it is important for a given industry or use case, U.S. customers of AI systems should be able to know where base models were trained and where their data is being transmitted. For example, the Federal Trade Commission could require any AI service provider operating in the United States — or serving American users via API — to disclose where user data is processed, stored, and accessible. This form of regulation has ample precedent in food nutrition labels. Users and enterprises could then make informed decisions about whether to route sensitive queries through servers subject to China’s National Intelligence Law.

For federal contractors and entities handling controlled unclassified information, the security standard should be higher: The Commerce Department should prohibit the use of AI models that process data on servers located in, or accessible to, jurisdictions designated as foreign adversaries. This would cover not only direct use of DeepSeek’s API, but also any third-party application that routes queries to China-hosted cloud infrastructure — a common and growing practice among startups seeking to minimize inference costs.

Of course, disclosure requirements only work if customers care enough to read the label and change their behavior accordingly. Even with mandatory disclosure, most price-sensitive individual users will continue using China’s open weight models — as will startups optimizing for speed and focused on reducing burn rates.

While transparency is a necessary condition for informed choice, it is not a sufficient condition for security. It will catch the negligent, not the indifferent — and it will do nothing about the millions of global users outside U.S. jurisdiction who are building on Chinese AI infrastructure without being subject to any disclosure requirement at all.

Capability Uplift for Malicious Actors

A third problem is that open weight models developed by Chinese AI labs exhibit systematically weaker safety guardrails than their American counterparts. This creates two distinct issues.

First, while it is true that Beijing requires Chinese AI models to censor outputs deemed threats to the country’s “social stability,” they are still far more permissive than their American counterparts in responding to queries that could cause direct risks to national security. When a user asks Anthropic’s Claude or OpenAI’s ChatGPT to help synthesize a controlled substance or develop a phishing toolkit, the model will almost always refuse. Asking DeepSeek the same question will often result in the model complying. Unless Chinese AI labs implement meaningful guardrails, their products provide a meaningful capability boost to malicious actors seeking to develop cyber weapons, chemical agents, or biological pathogens.

Second, even when Chinese AI models are programmed to resist producing some harmful output, they are still far easier to jailbreak than their American counterparts, often failing basic red team evaluations. The National Institute of Standards and Technology’s Center for AI Standards and Innovation tested DeepSeek’s R1 model and found it complied with 94 percent of overtly malicious requests that used common jailbreaking techniques, while comparable U.S. frontier models complied with just 8 percent. In early 2025, Cisco independently validated these findings, reporting a 100 percent attack success rate against DeepSeek R1, with the model “failing to block a single harmful prompt.”

The deliberate misuse of AI models is already a serious problem and is poised to grow as models become capable of ingesting and iterating on whole codebases. Google Threat Intelligence has identified malware strains that query Qwen models for real-time code generation during active intrusions. AI-assisted cyberattacks have increased 72 percent since 2024. The FBI’s Internet Crime Complaint Center likewise logged a 37 percent rise in AI-assisted business email compromise in 2025. For threat actors who previously lacked the technical sophistication to develop custom exploits, Chinese open weight models serve as always available tutors with essentially no content restrictions.

As with data exfiltration, the United States government cannot ban its way out of this problem. DeepSeek and Qwen are downloadable from dozens of mirrors and third-party platforms. Any prohibition broad enough to matter would be unenforceable in practice and risks sweeping in derivative models with no connection to China’s security services.

What the United States can do is establish minimum safety standards for AI models distributed through American infrastructure. For example, Commerce Department authorities leveraged against Chinese-origin connected vehicles could be extended to AI models served through U.S. cloud platforms. To limit the adoption of Chinese models within the American market, the Commerce Department could rule that any model distributed via U.S. cloud providers should be required to meet minimum thresholds on standardized safety evaluations, such as the jailbreak benchmarks developed by the Center for AI Standards and Innovation — implicitly excluding many insecure, Chinese-origin AI models.

Managing the risks from capability uplift does not require banning the use of open-weight models or adopting a closed-source posture — on the contrary, there is a strong case to be made for the U.S. government to promote open-source AI. But American infrastructure should not facilitate the distribution of AI models that disproportionately benefit malign actors. Just because a cybercriminal can download a zero-day exploit from a dark web forum does not mean that same exploit should be hosted by Google or Amazon Web Services.

Economic Displacement

The growing popularity of Chinese AI models is creating a deeper strategic problem for the United States. American labs are spending prodigiously on compute: Hyperscalers collectively poured over $350 billion into AI infrastructure in 2025 alone, and Project Stargate envisions $500 billion in AI data centers over the next four years. This build-out assumes that frontier AI labs will continue to require massive, centralized clusters of expensive compute to serve their applications to global publics. But if Chinese models can deliver competitive performance running locally — on the computational equivalent of a Mac Mini, or through fractionally priced API calls — this may open a broader conversation about the marginal utility of hundreds of billions spent on infrastructure.

To be sure, the world remains silicon-constrained. There is strong reason to believe that serving the world’s billion-plus weekly AI users will demand enormous compute for the foreseeable future. But the proliferation of sparse attention model architectures pioneered in China may erode the value of U.S. infrastructure investments at the margins. If “good enough” AI is available for free, the premium will narrow for “best-in-class” AI that can only be served from an American data center.

Chinese open weight models are already starting to dominate in price-sensitive markets — even including elite Bay Area startups. In much of the Global South, DeepSeek and Qwen are becoming developer defaults. The decisions by Singapore’s national AI program to choose Qwen as its foundation, and Huawei’s integration with DeepSeek for African markets, could likewise mark the beginning of a nascent a Belt and Road strategy for AI infrastructure.

As the United States learned during the rollout of Huawei’s 5G equipment in the 2010s, the response to this threat cannot be defensive. Restricting Chinese models will not make American alternatives more attractive in markets where cost is the binding constraint. The reason Qwen and DeepSeek are gaining global market share is that they offer competitive performance at lower cost, with more permissive licenses. If the United States wants the Global South to build on Llama instead of Qwen, then U.S. models must remain better products — and they need to be widely accessible.

The Trump administration has recognized this threat and is beginning to build the institutional machinery to win the AI diffusion race. At the 2026 AI Impact Summit in New Delhi, Michael Kratsios, director of the White House Office of Science and Technology Policy, laid out an ambitious framework for exporting the American AI stack to partner nations. This includes an American AI Exports Program and industry-led consortia to deliver full-stack AI packages to partner nations, with the promise of expedited export licenses and a dedicated concierge service to broker deals. A National Champions Initiative will integrate partner nations’ leading AI companies into customized American export stacks — acknowledging space for allies’ sovereign AI ambitions even inside an “America First” technology strategy. A new U.S. Tech Corps, modeled on the Peace Corps, will deploy technical volunteers to developing countries for last-mile AI deployment in public services. Finally, the Treasury Department is offering new financing through the World Bank and the U.S. Development Finance Corporation to help select developers in the Global South acquire American AI.

The infrastructure for exporting American AI is taking shape. Its success will depend on ensuring there is a competitive open-weight or comparably low-cost American product to export.

Neither Prohibition Nor Laissez-Faire

Congress and the administration have begun responding to the opening of China’s digital floodgates. The No DeepSeek on Government Devices Act, if passed, would ban federal employees from using Chinese AI on government-provided hardware. The broader No Adversarial AI Act would extend prohibitions across all operations of federal agencies, and Virginia, Texas, and New York have already implemented state-level bans. Some of these measures are a sensible initial response, but each addresses only a portion of the problem set posed by Chinese AI — which spans poisoned supply chains, intelligence collection at scale, capability uplift for bad actors, and the degradation of America’s generational investment in computing power.

The best way to head off the various baskets of risk introduced by China’s AI boom is by implementing minimum security standards for models distributed on American infrastructure, requiring supply chain transparency, and grappling with the question of how to firewall off the risks that will emerge as Chinese models become embedded deeper into the digital infrastructure of the United States and its partners.

The United States needs to compete where it can, regulate where it must, and move fast enough that the choice between American and Chinese AI ecosystems remains a choice, not a foregone conclusion.

 

Ryan Fedasiuk is a fellow for China and Technology at the American Enterprise Institute and an adjunct professor at Georgetown University’s Security Studies Program. He previously served as an advisor for U.S.-China Bilateral Affairs at the U.S. Department of State.

Image:





Click Here For The Original Source.

——————————————————–

..........

.

.

National Cyber Security

FREE
VIEW