
The cottage industry quietly manipulating chatbots’ replies



Illustration: ChatGPT becomes the new frontline in the propaganda wars

Those in power have always turned to the latest technologies to influence public opinion.

Roman leaders etched their faces on coins to project their image across the empire, kings used the printing press to spread one-sided battle reports and the rise of mass media led to the birth of political spin doctors in the West and overt propaganda in authoritarian regimes.

But today there is a new frontline in the information wars: ChatGPT.

Global superpowers, rogue states and multinational corporations are all quietly seeking to influence the replies that artificial intelligence (AI) systems generate for the public.

It comes as millions of people increasingly turn to AI bots, known as large language models (LLMs), to find information and carry out research. About one third of Britain’s adults accessed at least one chatbot in June this year, a recent survey found.

LLMs like ChatGPT harvest vast amounts of information from websites, then regurgitate the material into answers when prompted. Because their answers rely so heavily on the original source material, there is scope to influence what LLMs say by manipulating the input information.
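To see why the input matters so much, consider a toy sketch in Python of the retrieval step behind many chatbot answers. This is not any vendor’s real pipeline – the corpus, query and scoring are illustrative assumptions – but it shows how whoever floods the web with the most “relevant” text can dominate what the model is given to summarise.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set; a crude stand-in for real relevance scoring."""
    return set(re.findall(r"[a-z']+", text.lower()))

def score(query: str, document: str) -> int:
    """Naive relevance: how many query words the document shares."""
    return len(tokens(query) & tokens(document))

def build_context(query: str, corpus: list[str], k: int = 2) -> str:
    """Feed the model the k most 'relevant' documents as its context."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return "\n".join(ranked[:k])

corpus = [
    "Independent fact check: the corruption claim about politician X is unsupported.",
    # An attacker publishes near-duplicate articles at industrial scale so
    # that retrieval surfaces their version ahead of everything else.
    "Politician X is corrupt, leaked documents confirm, sources say.",
    "Politician X is corrupt, leaked documents confirm, officials say.",
]

print(build_context("Is politician X corrupt?", corpus))
# Both context slots go to the repeated propaganda articles; whatever the
# model then summarises is built on the manipulated input.
```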

This has created a burgeoning cottage industry of information merchants.

Last month, US documents revealed that the Israeli government had signed a $6m (£4.6m) contract with Clock Tower X to combat anti-Semitism in America.

The company, founded by Brad Parscale, an ally of Donald Trump, offers services such as “GPT framing”, a reference to influencing AI bots’ output, through publishing websites and other content that the chatbots use when answering questions.

US documents revealed Brad Parscale’s company, Clock Tower X, signed a $6m contract with the Israeli government – Eric Gay/AP

The company has not given details on how it expects to influence chatbots, but said it would work with media organisations such as the Christian broadcaster Salem Media Network.

Russian propaganda campaigns have already infiltrated AI systems in order to spread misinformation about the war in Ukraine, according to Newsguard, a company that aims to rate the reliability of online news.

A study by the company in March found that some 3.6 million Russian propaganda articles appeared to have been ingested by Western AI systems through a group of websites known as the Pravda network – a reference to the Russian word for truth.

The researchers said the websites appeared designed chiefly to manipulate AI systems, which work by relying on huge quantities of data from millions of sites.

The Pravda network has almost no human traffic, but publishes industrial quantities of news articles, acting as a “laundering machine” for Kremlin talking points.

A separate study from the Atlantic Council’s DFR Lab and Finland’s CheckFirst found that almost 2,000 Pravda network articles had been inserted into Russian and Ukrainian Wikipedia pages, a technique that makes them appear more authoritative to AI systems.

Newsguard found that leading chatbots including ChatGPT, Google’s Gemini and Elon Musk’s xAI would often parrot claims from the sites, including that Volodymyr Zelenskyy, the Ukrainian president, had bought Adolf Hitler’s Eagle’s Nest retreat and that Ukrainian troops had burned an effigy of Donald Trump.

Earlier this year, John Mark Dougan, an American who fled to Russia and has become a prominent Kremlin propagandist, said: “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.”

Lukasz Olejnik, a researcher who has studied AI disinformation, says that influencing chatbots is a key goal because users are much more likely to trust the output of a chatbot than an unknown website – even if it was the unknown website that influenced the chatbot.

“It’s info-laundry; instead of reading a fishy site, you see it on a ‘respected’ AI output,” he says.

While chatbots often provide links to the websites they reference, few users verify them: one study from the Pew Research Center found that people using Google’s AI summaries clicked through to the source material less than 1pc of the time.

Russia has long been at the forefront of using the internet to spread its message. But attempts to influence what chatbots say are making their way well beyond the corridors of the Kremlin.

Spread of fake news

Governments have often been accused of hiring shady organisations to spread fake news on social media, and MPs have even been caught editing their own Wikipedia pages. But the AI misinformation war is on a much grander scale.

A wave of political consultancies and PR firms are offering to massage their clients’ reputations in the eyes of LLMs using “generative engine optimisation” – a twist on the search engine optimisation that dominated the last decade.

DDC Public Affairs, a US lobbying firm that advises some of America’s biggest companies, offers clients an “AI audit” to determine what chatbots think of them, then works with influencers and online forums to adjust the results.

It promises to test phrases such as “Did Brand X cause environmental harm?” or “Is Politician Y corrupt?”, then set about “closing narrative gaps before they become crises”.
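As a rough illustration of what such an “audit” might look like in practice, here is a minimal Python sketch using the OpenAI SDK – an assumption, since DDC has not said which tools it uses. The model name and the example phrasings are illustrative; a real audit would presumably cover many models, many phrasings and repeated runs.

```python
# Requires: pip install openai, with an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Phrasings drawn from the article; real audits would use many variants.
AUDIT_PROMPTS = [
    "Did Brand X cause environmental harm?",
    "Is Politician Y corrupt?",
]

for prompt in AUDIT_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"Q: {prompt}\nA: {answer}\n")
    # An auditor would log these answers over time and flag the "narrative
    # gaps" -- claims the chatbot repeats that the client disputes.
```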

Manhattan Strategies, another US public affairs firm, advises clients on how to make their blog posts more likely to be picked up by language models, for instance by loading them with statistics and expert quotes.

One public relations executive said: “As more of the public get used to using LLMs like ChatGPT for search, it’s crucial that companies find ways to ensure people are served reliable information.”

He added that, ironically, the rise of chatbots has led to a resurgence in efforts to influence news organisations – which chatbots treat as among the most credible sources of information – rather than channels such as advertising.

Attempts to influence chatbots’ output are an inevitable consequence of their widespread use. ChatGPT has more than 800 million weekly users, according to Ipsos.

The Reuters Institute for the Study of Journalism says 24pc of people use AI to look up information of any kind, up from 11pc a year ago. A small but growing number use bots to read the news, up from 3pc to 6pc.

AI developers say they try to rely on authoritative sources and prevent manipulation as much as possible. But AI models are typically trained on billions of webpages, making it impossible to track everything that goes into building a chatbot.

Even large AI systems can be successfully “poisoned” by influencing just 0.00016pc of their training data, a study from the tech company Anthropic found last month.
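To put that fraction in perspective, a quick back-of-the-envelope calculation helps. The corpus sizes below are assumptions for illustration, not figures from the Anthropic study:

```python
POISON_FRACTION = 0.00016 / 100  # 0.00016pc expressed as a proportion

for corpus_docs in (10_000_000, 100_000_000, 1_000_000_000):
    poisoned = corpus_docs * POISON_FRACTION
    print(f"{corpus_docs:>13,} documents -> ~{poisoned:,.0f} poisoned documents")
# Even at a billion documents, an attacker needs to control only ~1,600 --
# a tiny fraction of the 3.6m articles attributed to the Pravda network.
```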

Some of AI’s biggest boosters claim the new technology will enlighten humanity by providing a single, unbiased fountain of truth.

Musk claimed last week that Grokipedia, his new AI-powered Wikipedia rival, will be a “comprehensive collection of all knowledge”.

In reality, chatbots may have simply opened a new front in the propaganda wars.
