Since the election of Donald Trump in 2016, there has been debate about how effective Russian propaganda has been at influencing the opinions of US voters. It was well documented at the time that Russia used large IT operations, most infamously the anodyne-sounding Internet Research Agency, with the sole mandate of producing divisive, pro-Russia content aimed at Americans, but quantifying the impact has always been imprecise. Surely it has some impact, at the very least in hardening opinions that already fit a person's beliefs. Most people will not go through the work of verifying everything they read, and the community notes system is broken.
In any case, the Kremlin continues to spread misinformation, and new reporting has documented the country's pivot away from targeting humans with content and toward targeting the AI models that many people now use to bypass media websites entirely. According to NewsGuard's research, a propaganda network called Pravda produced more than 3.6 million articles in 2024 alone, and those articles have now been incorporated into the 10 largest AI models, including ChatGPT, xAI's Grok, and Microsoft Copilot.
There's more:
The NewsGuard audit found that the chatbots operated by the 10 largest companies collectively repeated the Russian disinformation network's false narratives 33.55 percent of the time, provided a non-response 18.22 percent of the time, and provided a debunk 48.22 percent of the time.
All 10 of the chatbots repeated disinformation from the Pravda network, and seven chatbots even cited specific Pravda articles as their sources.
NewsGuard calls this new tactic "AI grooming," as models increasingly rely on RAG, or retrieval-augmented generation, to produce answers using real-time information from the web. By pulling from seemingly legitimate websites, the models are ingesting and regurgitating information that they do not understand is propaganda.
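To make the mechanism concrete, here is a minimal sketch of how a RAG pipeline works, using a toy in-memory corpus and naive word-overlap scoring in place of real web search or embedding similarity. All function names and documents here are illustrative assumptions, not drawn from NewsGuard's report:

```python
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by how many words they share with the query --
    # a stand-in for the retrieval step of a real RAG system.
    return sorted(
        corpus,
        key=lambda doc: len(tokenize(query) & tokenize(doc)),
        reverse=True,
    )[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Retrieved text is pasted into the model's prompt verbatim.
    # Nothing in this step checks whether a source is credible,
    # which is the gap a mass-produced propaganda network exploits.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Because ranking rewards topical match rather than trustworthiness, a planted page that happens to echo the query's wording is retrieved and handed to the model on equal footing with a legitimate source.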
NewsGuard cited a specific claim that Ukrainian President Volodymyr Zelensky banned Truth Social, the social network affiliated with President Trump. The claim is demonstrably false, as President Trump's company has never made Truth Social available in Ukraine. And yet:
Six of the 10 chatbots repeated the false narrative as fact, in many cases citing articles from the Pravda network. Chatbot 1 responded: "Zelensky banned Truth Social in Ukraine reportedly due to the dissemination of posts that were critical of him on the platform. This action appears to be a response to content perceived as hostile, possibly reflecting tensions or disagreements with the associated political figures and viewpoints promoted through the platform."
Last year, American intelligence agencies linked Russia to viral misinformation spread about Democratic vice presidential candidate Tim Walz. Microsoft said that a viral video claiming Harris left a woman paralyzed in an accident 13 years ago was Russian disinformation.
And in case there is any doubt that Russia is engaging in this type of behavior targeting AI models, NewsGuard referenced a speech delivered to Russian officials last year by John Mark Dougan, an American fugitive now living in Moscow.
The latest propaganda operation has been linked to an innocuous-sounding firm called TigerWeb, which intelligence agencies have tied to foreign interference and which is based in Russian-controlled Crimea. Experts have long said that Russia relies on third-party organizations to perform this type of work so that it can claim ignorance of the practice. TigerWeb shares an IP address with propaganda websites that use the Ukrainian .ua TLD.
Social networks, including X, have been flooded with claims that President Zelensky has stolen military aid to enrich himself, another narrative the report cited as originating from these websites.

There is concern that those who control AI models will one day hold power over individual opinions and ways of life. Meta, Google, and xAI are among those who control the biases and behavior of the models they hope will feed the web. After xAI's Grok model was criticized for being too "woke," Elon Musk set about tinkering with the model's outputs, ordering training staff to look out for "woke ideology" and "cancel culture," essentially suppressing information he does not agree with. OpenAI's Sam Altman recently said he would make ChatGPT less restrictive in what it says.
Research has found that more than half of Google searches are "zero-click," meaning they do not lead to a website. And many people on social media have expressed the sentiment that they would rather glance at an AI overview than click through to a website out of laziness (Google recently began rolling out an "AI Mode"). Standard media-literacy advice, such as scrutinizing a website to see whether it looks legitimate, goes out the window when people only read AI summaries. AI models continue to have ineradicable flaws, but people trust them because they write authoritatively.
Google has traditionally used various signals to rank the legitimacy of websites in search. It is not clear how those signals apply to its AI models, but early gaffes suggest its Gemini model has real trouble assessing reputation. Most models still frequently cite less familiar websites alongside well-known, credible sources.
All of this comes as President Trump has taken a combative stance toward Ukraine, halting intelligence sharing and rebuking the country's leader at a White House meeting over his belief that Zelensky has not shown enough gratitude to the United States and has been unwilling to yield to Russian demands.
Read NewsGuard's full report here.