[LUM#22] When AI Takes Over, Fake News Is Doomed
COVID, vaccination, global warming… Scientific topics are everywhere in major public debates, especially on social media.
How can we separate the wheat from the chaff among all these claims? At LIRMM, we’re tackling this challenge with artificial intelligence.

Far from being confined to specialized journals, science now permeates tweets, posts, and comments of all kinds. “It’s a fact: science is an integral part of the discourse on social media; everyone is talking about it, whether to lend weight to arguments or in response to societal anxiety,” explains Konstantin Todorov, a researcher at the Montpellier Laboratory of Computer Science, Robotics, and Microelectronics¹.
One observation leads to another: scientific facts are often presented in a simplified, decontextualized, and misleading way. “We observed this phenomenon extensively, particularly during the COVID-19 pandemic, when numerous pseudoscientific claims circulated online, spreading bias or misinformation,” recalls Sandra Bringay, a researcher at LIRMM. “The mechanisms inherent in online platforms mean that controversial or false statements generate more engagement and interest,” adds the specialist.
Combating misinformation
In this context, how can we combat misinformation and improve understanding of complex scientific issues? For the two researchers, the answer lies in artificial intelligence. Together with Salim Hafid, a PhD student at LIRMM, they propose a hybrid AI approach designed to interpret scientific discourse online. Their goal: to detect and classify scientific claims in social media data.
As part of the Franco-German AI4Sci project, they have access to a massive database of tweets posted on Twitter, the predecessor of X, “a huge corpus that we were able to access thanks to a collaboration with the German research institute GESIS, a project partner,” explains Konstantin Todorov. The computer scientists used this data for machine learning, employing large language models that associate concepts with the text.
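To give a concrete sense of what such claim detection can look like, here is a minimal sketch in Python. The article does not describe the AI4Sci pipeline at this level of detail, so the model choice, labels, and example tweets below are illustrative assumptions: a generic zero-shot classifier stands in for the project’s trained models.

```python
# Minimal, illustrative sketch of spotting scientific claims in tweets.
# NOTE: the model, labels, and examples are assumptions for demonstration
# only; they are not the AI4Sci project's actual pipeline.
from transformers import pipeline

# A generic zero-shot classifier stands in for a model fine-tuned on tweets.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["scientific claim", "personal opinion", "other"]

tweets = [
    "A new peer-reviewed study shows the vaccine cuts transmission sharply.",
    "Good morning everyone, coffee first!",
]

for tweet in tweets:
    result = classifier(tweet, candidate_labels=labels)
    # The pipeline returns the candidate labels sorted by score, best first.
    print(f"{tweet!r} -> {result['labels'][0]}")
```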
Providing guidance
“The idea is to teach the machine to recognize a scientific claim—by checking, for example, whether there are references, publications, certain word combinations, or the quality of the source—and to place it within the accompanying media and scientific context,” explains Sandra Bringay.
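As a rough illustration of the surface cues Sandra Bringay mentions, such as references and characteristic word combinations, the sketch below checks a tweet for a few of them. The patterns and phrase lists are assumptions invented for this example, not the project’s actual features.

```python
# Hedged sketch: simple surface cues of the kind the article mentions
# (references, publications, characteristic word combinations). The real
# system combines such signals with learned models; these regexes and
# keyword lists are illustrative assumptions only.
import re

CITATION_PATTERN = re.compile(r"(doi\.org|arxiv\.org|pubmed|10\.\d{4,}/)", re.I)
CLAIM_PHRASES = ("study shows", "according to researchers", "peer-reviewed",
                 "scientists found", "evidence suggests")

def claim_cues(text: str) -> dict:
    """Return binary cues suggesting a tweet may contain a scientific claim."""
    lower = text.lower()
    return {
        "has_reference": bool(CITATION_PATTERN.search(text)),
        "has_claim_phrase": any(phrase in lower for phrase in CLAIM_PHRASES),
    }

print(claim_cues("Evidence suggests masks work, see doi.org/10.1000/xyz"))
# -> {'has_reference': True, 'has_claim_phrase': True}
```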
And what about verifying whether these claims are true or false? “In public discourse, what matters more than knowing whether the information is true is understanding how it is used,” replies Konstantin Todorov. “The goal is to move toward tools that give users flags, in other words cues, to facilitate good reading practices.”
On this project, the researchers are also collaborating with sociologists and journalists, with a broader goal in mind: to counter manipulative strategies and help foster a healthy, democratic public discourse.
- LIRMM (UM, CNRS, Inria, UPVD, UPVM)