OpenAI has banned a group of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election, according to a blog post published Friday. The company says the operation created AI-generated articles and social media posts, though they don't appear to have reached a large audience.
This isn’t the first time OpenAI has banned accounts linked to state-affiliated actors using ChatGPT maliciously. In May, the company took down five campaigns that used ChatGPT to manipulate public opinion.
These incidents are reminiscent of state actors using social media platforms like Facebook and Twitter to try to influence previous election cycles. Now, similar (or perhaps the same) groups are using generative AI to flood social media channels with disinformation. Like the social media companies, OpenAI appears to be taking a whack-a-mole approach, banning accounts associated with these efforts as they emerge.
OpenAI says its investigation into this group of accounts benefited from a Microsoft Threat Intelligence report released last week, which identified the group (called Storm-2035) as part of a broader campaign to influence the U.S. election that has been ongoing since 2020.
Microsoft said Storm-2035 is an Iranian network with multiple news-mimicking sites that “actively engage groups of U.S. voters on opposite sides of the political spectrum with polarizing messages on issues such as U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” The playbook, as has been shown in other operations, is not necessarily to promote one policy or another, but to sow dissent and conflict.
OpenAI identified five Storm-2035 websites that posed as both liberal and conservative news outlets, with convincing domain names like "evenpolitics.com." The group used ChatGPT to craft several long-form articles, including one claiming that "X censors Trump's tweets," which Elon Musk's platform certainly hasn't done (if anything, Musk is encouraging former President Donald Trump to engage more on X).
On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One such tweet falsely and confusingly claimed that Kamala Harris blames "rising immigration costs" on climate change, followed by "#DumpKamala."
OpenAI says it has seen no evidence that Storm-2035’s articles have been widely shared, and noted that most of its social media posts have received few or no likes, shares, or comments. That’s often the case with these operations, which are quick and cheap to launch using AI tools like ChatGPT. Expect to see many more notifications like this as the election approaches and online bickering between the parties intensifies.