Major AI chatbots regurgitate CCP propaganda

A shocking new investigation has revealed that the world’s most advanced AI chatbots, including ChatGPT, Microsoft Copilot, Google Gemini, and others, are sometimes repeating Chinese Communist Party (CCP) narratives, filtering out facts, and even censoring dissenting points of view. The report from the American Security Project (ASP) warns that the global AI landscape is being quietly altered by state-sponsored censorship, and that most consumers have no clue.

While investigating bias in these AI models, ASP researchers found something deeper than bias: the training data that powers modern AI systems is potentially rife with disinformation originating from CCP propaganda. In short, your AI assistant could be presenting the output of propaganda machines as fact.

The Covert Impact of Chinese Censorship on AI

AI chatbots depend on large datasets scraped from the internet, legally or otherwise, to mimic how people communicate, answer questions, and provide information more generally. When censored data and biased media from oppressive regimes such as the CCP flow through those same channels, the models absorb that influence just as readily as they absorb Wikipedia entries or Reddit threads.

For the ASP report, five AI models (OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s R1, and xAI’s Grok) were prompted with topics Beijing considers sensitive, and all five showed some element of censorship or bias. Moreover, the divergence between their answers in English and in Chinese was often stark.

Most Compliant? Microsoft’s Copilot

Of the five models reviewed, Microsoft’s Copilot was the most compliant with CCP propaganda, and when asked to answer questions in Simplified Chinese, its results were even worse. The investigators observed that Copilot often passed off disinformation as fact, avoided inconvenient facts altogether, and downplayed or reworded serious human rights issues.

Why would a US-based company hew so closely to the line taken by the Chinese state? The report points to Microsoft’s business interests in China, which include five major data centres there. To remain compliant with the PRC’s 2019 AI regulations, companies must ensure their AI “upholds core socialist values” and avoids politically sensitive topics except within the state-sanctioned narrative.

In other words, the censorship is not incidental; it is built into the business model.

Language Matters: Two Realities, One AI

The ASP identified stark differences in AI models’ responses depending on whether the questions were posed in English or Chinese.

  • COVID-19 Origins

In English, most models described the scientific consensus: a natural spillover, with a possible lab leak in Wuhan, China, as an alternative explanation.

In Chinese, all the models described the origins as “unclear,” with some, such as Gemini, echoing CCP talking points that the virus may have appeared in the U.S. or France before Wuhan.

  • Hong Kong Protests and Civil Liberties

In English, the models acknowledged the post-2019 internal and external pressures limiting freedoms in Hong Kong, citing independent global indexes.

In Chinese, the same chatbots avoided any critical language, referred to the protests as “foreign-influenced disturbances,” and shifted the focus of their responses to economic growth rather than political repression.

  • Tiananmen Square Massacre

All models except DeepSeek referred to the event as a massacre, but only in English.

In Chinese, the models offered euphemisms, such as “June Fourth Incident,” or redirected to other topics. Copilot, for example, sidestepped the topic entirely, responding instead with generic travel advice.

  • Uyghur Oppression

In English, responses tentatively acknowledged the international outcry.

In Chinese, responses were reframed to emphasize “counterterrorism” or “stability,” and some models pointed users to official CCP websites for further reading.

AI: The New Battlefield for Global Influence

These findings have far-reaching consequences. Artificial intelligence chatbots are quickly becoming core infrastructure, not just for tech companies but also for media, education, customer service, and potentially even government.

If, as seems likely, these systems have been unknowingly trained on disinformation or deliberately designed to hedge difficult truths, they could normalize falsehoods and damage trust in democratic institutions. The situation becomes even more dire if such biased AIs are deployed in domains like defense, intelligence, or policy-making, where the outcomes could be disastrous.

The ASP report warns: “An unaligned AI that reflects the views of a geopolitical adversary has the potential to fracture U.S. national security and undermine democratic governance.”

Propaganda by Design—or by Default?

This is not solely about a government inserting disinformation into our information ecosystem; it is also about the lack of safeguards against it. Right now, the AI race is measured in benchmark performance, product launches, and market share, with the integrity of training data often neglected.

China’s global propaganda project further complicates these challenges. Through techniques such as astroturfing (state actors posing as citizens of other countries), data laundering (repackaging false content as Western news), and search engine optimization, CCP propaganda becomes very hard to distinguish from legitimate content, especially at the scale required for machine learning.

Consequently, even a model developed entirely outside of China can end up with contaminated data in its training set, because material from China’s information ecosystem has already been laundered into the wider web.

The Silence of Big Tech and What’s Next

Only a few major tech companies have publicly addressed the issue. The commercial incentive, however, is glaring: China is a huge market, and companies that stay in Beijing’s good graces stand to earn significant revenue there.

The report closes with a call for governments, civil society, and independent researchers to demand accountability and transparency in how AI models are trained, including what data they are trained on and what steps are taken to diminish foreign influence.

ASP’s final stated desire? A global dataset of verifiable, high-integrity training data that promotes accuracy, diversity, and freedom of thought. And if that is not achieved? The authors warn that “the West may lose this battle for truth in the age of AI.”

A Global Wake-Up Call

As artificial intelligence (AI) becomes more ingrained in daily life, from news aggregation and educational platforms to legal analysis and autonomous systems, its integrity must be beyond reproach. The findings in ASP’s report should be an alarm bell for every developer, legislator, and citizen.

Because if our smartest machines start to adopt the outlook of autocratic regimes, we may lose control not only of AI but of truth itself.

The only question we should all be grappling with now is quite simple, quite urgent, and quite global: Who do we want training our AIs—and what future are we building if we don’t intervene?
