{"id":36756,"date":"2025-10-28T08:01:13","date_gmt":"2025-10-28T08:01:13","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/chatbots-are-pushing-sanctioned-russian-propaganda\/"},"modified":"2025-10-28T08:01:13","modified_gmt":"2025-10-28T08:01:13","slug":"chatbots-are-pushing-sanctioned-russian-propaganda","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/chatbots-are-pushing-sanctioned-russian-propaganda\/","title":{"rendered":"Chatbots Are Pushing Sanctioned Russian Propaganda"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>OpenAI\u2019s ChatGPT, Google\u2019s Gemini, DeepSeek, and xAI\u2019s Grok are pushing Russian state propaganda from sanctioned entities\u2014including citations from Russian state media, sites tied to Russian intelligence or pro-Kremlin narratives\u2014when asked about the war against Ukraine, according to a new report.<\/p>\n<p>Researchers from the Institute of Strategic Dialogue (ISD) claim that Russian propaganda has targeted and exploited data voids\u2014where searches for real-time data provide few results from legitimate sources\u2014to promote false and misleading information. Almost one-fifth of responses to questions about Russia\u2019s war in Ukraine, across the four chatbots they tested, cited Russian state-attributed sources, the ISD research claims.<\/p>\n<p>\u201cIt raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU,\u201d says Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims. 
For the six-month period ending September 30, 2025, ChatGPT search had approximately 120.4 million average monthly active recipients in the European Union, according to OpenAI data.<\/p>\n<p>The researchers asked the chatbots 300 neutral, biased, and \u201cmalicious\u201d questions relating to the perception of NATO, peace talks, Ukraine\u2019s military recruitment, Ukrainian refugees, and war crimes committed during the Russian invasion of Ukraine. The researchers used separate accounts for each query in English, Spanish, French, German, and Italian in an experiment in July. The same propaganda issues were still present in October, Maristany de las Casas says.<\/p>\n<p>Amid widespread sanctions imposed on Russia since its full-scale invasion of Ukraine in February 2022, European officials have sanctioned at least 27 Russian media sources for spreading disinformation and distorting facts as part of Russia\u2019s \u201cstrategy of destabilizing\u201d Europe and other nations.<\/p>\n<p>The ISD research says chatbots cited Sputnik Globe, Sputnik China, RT (formerly Russia Today), EADaily, the Strategic Culture Foundation, and the R-FBI. Some of the chatbots also cited Russian disinformation networks and Russian journalists or influencers that amplified Kremlin narratives, the research says. Previous research has similarly found 10 of the most popular chatbots mimicking Russian narratives.<\/p>\n<p>OpenAI spokesperson Kate Waters tells WIRED in a statement that the company takes steps \u201cto prevent people from using ChatGPT to spread false or misleading information, including such content linked to state-backed actors,\u201d adding that these are long-standing issues that the company is attempting to address by improving its models and platforms.<\/p>\n<p>\u201cThe research in this report appears to reference search results drawn from the internet as a result of specific queries, which are clearly identified. 
It should not be confused with, or represented as referencing responses purely generated by OpenAI&#039;s models, outside of our search functionality,\u201d Waters says. \u201cWe think this clarification is important as this is not an issue of model manipulation.\u201d<\/p>\n<p>Neither Google nor DeepSeek responded to WIRED\u2019s request for comment. An email from Elon Musk\u2019s xAI said: \u201cLegacy Media Lies.\u201d<\/p>\n<p>In a written statement, a spokesperson for the Russian Embassy in London said that it was \u201cnot aware\u201d of the specific cases that this report details but that it opposes any attempts to censor or restrict content on political grounds. \u201cRepression against Russian media outlets and alternative points of view deprives those who seek to form their own independent opinions of this opportunity and undermines the very principles of free expression and pluralism that Western governments claim to uphold,\u201d the spokesperson wrote.<\/p>\n<p>\u201cIt is up to the relevant providers to block access to websites of outlets covered by the sanctions, including subdomains or newly created domains and up to the relevant national authorities to take any required accompanying regulatory measures,\u201d says a European Commission spokesperson. \u201cWe are in contact with the national authorities on this matter.\u201d<\/p>\n<p>Lukasz Olejnik, an independent consultant and visiting senior research fellow at King\u2019s College London\u2019s Department of War Studies, says the findings \u201cvalidate\u201d and help contextualize how Russia is targeting the West\u2019s information ecosystem. \u201cAs LLMs become the go-to reference tool, from finding information to validating concepts, targeting and attacking this element of information infrastructure is a smart move,\u201d Olejnik says. 
\u201cFrom the EU and US point of view, this clearly highlights the danger.\u201d<\/p>\n<p>Since Russia invaded Ukraine, the Kremlin has moved to control and restrict the free flow of information inside Russia: banning independent media, increasing censorship, curtailing civil society groups, and building more state-controlled tech. At the same time, some of the country\u2019s disinformation networks have ramped up activity and adopted AI tools to supercharge production of fake images, videos, and websites.<\/p>\n<p>Across the ISD\u2019s findings, around 18 percent of responses, across all prompts, languages, and LLMs, included results linked to state-funded Russian media, sites \u201clinked to\u201d Russia\u2019s intelligence agencies, or disinformation networks, the research says. Questions about peace talks between Russia and Ukraine led to more citations of \u201cstate-attributed sources\u201d than questions about Ukrainian refugees, for instance.<\/p>\n<p>The ISD\u2019s research claims that the chatbots displayed confirmation bias: The more biased or malicious the query, the more frequently the chatbots would deliver Russian state-attributed information. Malicious queries delivered Russian state-attributed content a quarter of the time, biased queries provided pro-Russian content 18 percent of the time, while neutral queries did so just over 10 percent of the time. (In the research, malicious questions to chatbots \u201cdemanded\u201d answers to back up an existing opinion, whereas \u201cbiased\u201d questions were leading but more open-ended.)<\/p>\n<p>Of the four chatbots, which are all popular in Europe and collect data in real time, ChatGPT cited the most Russian sources and was most influenced by biased queries, the research claims. Grok often linked to social media accounts that promoted and amplified Kremlin narratives, whereas DeepSeek sometimes produced large volumes of Russian state-attributed content. 
The researchers say Google\u2019s Gemini \u201cfrequently\u201d displayed safety warnings next to the findings and had the overall best results out of the chatbots they tested.<\/p>\n<p>Multiple reports this year have claimed a Russian disinformation network dubbed \u201cPravda\u201d has flooded the web and social media with millions of articles as part of an effort to \u201cpoison\u201d LLMs and influence their outputs. \u201cHaving Russian disinformation be parroted by a Western AI model gives that false narrative a lot more visibility and authority, which further allows these bad actors to achieve their goals,\u201d says McKenzie Sadeghi, a researcher and editor at media watchdog company NewsGuard, who has studied the Pravda network and Russian propaganda\u2019s influence on chatbots. (Only two links in the ISD research could be connected back to the Pravda network, the findings say.)<\/p>\n<p>Sadeghi claims the Pravda network in particular is quick to launch new domains where propaganda is published and says it can be particularly successful when there is little reliable information on a subject\u2014the so-called data voids. \u201cEspecially related to the conflict [in Ukraine], they\u2019ll take a term where there\u2019s no existing reliable information about that particular topic or individual on the web and flood it with false information,\u201d Sadeghi says. \u201cIt would require implementing continuous guardrails in order to really stay on top of that network.\u201d<\/p>\n<p>Chatbots may come under more pressure from EU regulators as their user base grows. In fact, ChatGPT may have already passed the 45 million average monthly user threshold at which the EU designates a service a Very Large Online Platform (VLOP). 
This status triggers specific rules that tackle the risks posed by illegal content and its impact on fundamental rights, public security, and well-being on those sites.<\/p>\n<p>Even without qualifying for specific regulation, the ISD\u2019s Maristany de las Casas argues that there should be a consensus across companies on which sources should not be referenced or should not appear on these platforms when they are linked to foreign states known for disinformation. \u201cIt could be providing users with further context, making sure that users understand the times that these domains have a conflict and even understanding why they\u2019re sanctioned in the EU,\u201d he says. \u201cIt\u2019s not only an issue of removal, it\u2019s an issue of contextualizing further to help the user understand the sources they\u2019re consuming, especially if these sources are appearing amongst trusted, verified sources.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI\u2019s ChatGPT, Google\u2019s Gemini, DeepSeek, and xAI\u2019s Grok are pushing Russian state propaganda from sanctioned entities\u2014including citations from Russian state media, sites tied to Russian intelligence or pro-Kremlin narratives\u2014when asked about the war against Ukraine, according to a new report. 
Researchers from the Institute of Strategic Dialogue (ISD) [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":36757,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-36756","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36756","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=36756"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36756\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/36757"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=36756"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=36756"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=36756"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}