OpenAI says it disrupted at least 10 malicious AI campaigns already this year


  • OpenAI says it has disrupted numerous malicious campaigns using ChatGPT
  • These include employment scams and influence campaigns
  • Russia, China, and Iran are using ChatGPT to translate and generate content

OpenAI has revealed it has taken down a number of malicious campaigns using its AI offerings, including ChatGPT.

In a report titled “Disrupting malicious uses of AI: June 2025,” OpenAI lays out how it dismantled or disrupted 10 employment scams, influence operations, and spam campaigns using ChatGPT in the first few months of 2025 alone.

Many of the campaigns were conducted by state-sponsored actors with links to China, Russia, and Iran.

AI campaign disruption

Four of the campaigns disrupted by OpenAI appear to have originated in China, with their focus on social engineering, covert influence operations, and cyber threats.

One campaign, dubbed “Sneer Review” by OpenAI, targeted “Reversed Front,” a Taiwanese board game that features resistance against the Chinese Communist Party, spamming it with highly critical comments written in Chinese.

The network behind the campaign then generated an article and posted it on a forum claiming that the game had received widespread backlash based on the critical comments in an effort to discredit both the game and Taiwanese independence.

Another campaign, named “Helgoland Bite,” saw Russian actors use ChatGPT to generate German-language text criticizing the US and NATO, as well as content about the 2025 German election.


Most notably, the group also used ChatGPT to seek out opposition activists and bloggers, and to generate messages that referenced coordinated social media posts and payments.

OpenAI has also banned numerous ChatGPT accounts linked to US-targeted influence operations in a campaign it calls “Uncle Spam.”

In many cases, Chinese actors would generate highly divisive content aimed at widening the political divide in the US, including creating social media accounts that posted arguments for and against tariffs, as well as generating accounts that mimicked US veteran support pages.

OpenAI’s report is a key reminder that not everything you see online is posted by an actual human being, and that the person you’ve picked an online fight with could be getting exactly what they want: engagement, outrage, and division.


Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division), then continuing his studies at a postgraduate level, achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict transitioned his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.
