A Chinese law enforcement official accidentally exposed a sprawling global intimidation and influence network by using ChatGPT as a kind of private operations diary, according to a new threat intelligence report from OpenAI. The covert campaign focused on tracking, harassing, and psychologically pressuring Chinese dissidents abroad, and even attempted to smear a foreign leader who criticized Beijing.
ChatGPT Used as a “Digital Diary” of Covert Operations
OpenAI’s investigators traced a single ChatGPT account — now banned — that was used to log “cyber special operations” in striking detail, including targets, tactics, and progress updates. Those logs were then mapped to real activity carried out by networks of fake accounts across major social media platforms.
The operation involved hundreds of human operators and thousands of inauthentic accounts posting in multiple languages, giving one of the clearest views yet of how Chinese authorities industrialize online repression. OpenAI’s Ben Nimmo described it as “modern transnational repression” that aims to hit critics of the Chinese Communist Party “with everything, everywhere, all at once.”
Harassment, Fake Obituaries, and Mass Reporting
The report details a playbook that stretches well beyond simple trolling or propaganda posts. Documented tactics include:
- Impersonating U.S. immigration officials to intimidate a Chinese dissident living in the United States.
- Forging documents that appeared to come from a U.S. county court in an effort to get a dissident’s social media account taken down.
- Filing thousands of bogus platform abuse reports to pressure companies into suspending accounts critical of Beijing.
- Targeting dissidents’ mental health and families through persistent harassment and threats.
In one particularly disturbing episode, operators fabricated an obituary and gravestone photos falsely claiming that dissident Jie Lijian had died, then amplified the content across multiple platforms to spread the lie. OpenAI says versions of that smear had circulated online as early as 2023.
A Botched Smear Operation Against Japan’s Prime Minister
The same ChatGPT user also tried to weaponize AI against Japanese Prime Minister Sanae Takaichi after she criticized the Chinese Communist Party over human rights abuses in Inner Mongolia. The user asked ChatGPT to draft a multi-step plan to discredit Takaichi, including:
- Amplifying negative social media comments about her.
- Using fake email accounts to flood Japanese politicians and institutions with complaints.
- Exploiting anger over U.S. tariffs on Japanese goods to inflame public sentiment against her.
ChatGPT refused to generate the requested disinformation strategy. But weeks later, status updates the same user logged in ChatGPT indicated the campaign had gone forward using other AI tools, including locally run Chinese models such as DeepSeek.
OpenAI says hashtags consistent with the planned operation, including one roughly translated as “right-wing symbiont,” began appearing in small volumes on platforms like X, Pixiv, and Blogspot from late October 2025, though they failed to gain meaningful traction. Separate research by Japanese analysts has pointed to broader China-linked disinformation efforts targeting Takaichi during Japan’s recent elections, supported by thousands of fake accounts.
AI at the Center of a Geopolitical Contest
The revelations land amid escalating competition between Washington and Beijing over who will define the rules and uses of advanced artificial intelligence. Michael Horowitz, a former Pentagon official now at the University of Pennsylvania, said the findings “clearly demonstrate the way that China is actively employing AI tools to enhance information operations.”
According to Horowitz, the race is not only about frontier model capabilities, but also about how governments integrate AI into everyday systems of surveillance, censorship, and information control. Independent researchers have similarly warned that AI is becoming the backbone of a more expansive, predictive form of authoritarian governance in China, allowing authorities to monitor more people more closely with less effort.
OpenAI’s report underscores that while major Western platforms are beginning to detect and disrupt such campaigns, state-backed operators are already shifting to a mixed toolkit of Western and domestic AI models — and experimenting in real time with industrial-scale digital repression.
