OpenAI says it disrupted Chinese, Russian and Israeli influence campaigns

ChatGPT’s creator says influence operations have failed to gain traction or reach large audiences.

Artificial intelligence company OpenAI has announced that it has disrupted covert influence campaigns from Russia, China, Israel and Iran.

The creator of ChatGPT said on Thursday that it had identified five campaigns involving “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

The campaigns used OpenAI’s models to generate text and images that were published on social media platforms such as Telegram, X and Instagram, and in some cases exploited the tools to produce content with “fewer language errors than would have been possible for human operators”, OpenAI said.

OpenAI said it terminated accounts associated with two Russian operations, dubbed Bad Grammar and Doppelganger; a Chinese campaign known as Spamouflage; an Iranian network called the International Union of Virtual Media; and an Israeli operation called Zero Zeno.

“We are committed to developing safe and responsible AI, which means designing our models with security in mind and proactively intervening against malicious use,” the California-based startup said in a statement posted on its website.

“Detecting and disrupting abuse across multiple platforms, such as covert influence operations, can be challenging because we don’t always know how the content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”

Bad Grammar and Doppelganger largely generated content about the war in Ukraine, including narratives portraying Ukraine, the United States, NATO and the European Union in a negative light, according to OpenAI.

Spamouflage generated text in Chinese, English, Japanese and Korean that criticized prominent critics of Beijing, including actor and Tibet activist Richard Gere and dissident Cai Xia, and highlighted abuses against Native Americans, according to the startup.

The International Virtual Media Union generated and translated articles critical of the United States and Israel, while Zero Zeno targeted the United Nations agency for Palestinian refugees and “radical Islamists” in Canada, OpenAI said.

Despite their efforts to influence public discourse, the operations “do not appear to have benefited from increased audience engagement or reach as a result of our services,” the firm said.

The possibility of AI being used to spread disinformation has become a major topic of conversation as voters in more than 50 countries cast their ballots in what has been called the most important election year in history.

Last week, authorities in the US state of New Hampshire announced that they had charged a Democratic Party political consultant with more than two dozen counts for allegedly orchestrating robocalls that used an AI-created impersonation of US President Joe Biden to urge voters not to vote in the state’s presidential primary.

In the run-up to Pakistan’s parliamentary elections in February, jailed former prime minister Imran Khan used artificial intelligence-generated speeches to rally his supporters amid a government ban on public rallies.
