[LUM#22] When AI takes over, fake news dies out
COVID, vaccination, global warming... Scientific topics are omnipresent in major debates, especially on social media.
How can we separate fact from fiction among all these claims? At LIRMM, researchers are tackling the question with artificial intelligence.

Far from being confined to specialist journals, science now permeates tweets, posts, and comments of all kinds. "It's a fact that science is an integral part of social media discourse. Everyone talks about it, either to lend weight to their arguments or in response to societal anxieties," explains Konstantin Todorov, a researcher at the Montpellier Laboratory of Computer Science, Robotics, and Microelectronics¹.
One observation leads to another: scientific facts are often presented in a simplified, decontextualized, and misleading way. "We saw a great deal of this during the COVID-19 epidemic, when numerous pseudo-scientific claims circulated on the web, spreading bias or misinformation," recalls Sandra Bringay, a researcher at LIRMM. "The mechanisms built into online platforms mean that controversial or false statements generate more interaction and interest," adds the specialist.
Combating misinformation
In this context, how can we combat misinformation and improve understanding of complex scientific issues? For both researchers, the answer lies in artificial intelligence. Together with Salim Hafid, a PhD student at LIRMM, they propose a hybrid AI approach dedicated to interpreting scientific discourse online. Their goal is to detect and classify scientific statements in data from social media.
As part of the Franco-German AI4Sci project, they have access to a huge database containing all the tweets posted on X's predecessor, Twitter, "a vast corpus that we were able to access thanks to a collaboration with the German laboratory GESIS, a partner in the project," explains Konstantin Todorov. The computer scientists used this data for machine learning, relying on so-called large language models, which can associate concepts with a piece of text.
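To make this concrete, here is a minimal sketch of the kind of claim-detection step described above, using zero-shot classification with an off-the-shelf language model. The model name, labels, and example tweet are illustrative assumptions, not the AI4Sci project's actual pipeline or data.

```python
# Hypothetical illustration: associating concepts ("labels") with a tweet
# using a pretrained language model via Hugging Face's zero-shot pipeline.
# None of this is the AI4Sci project's actual code, models, or data.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # a common off-the-shelf choice
)

tweet = "A new study shows the vaccine cuts transmission by 90%."
labels = ["scientific claim", "personal opinion", "news report"]

result = classifier(tweet, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # labels sorted by descending score
```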
Providing guidance
"The idea is to teach the machine to recognize a scientific assertion, for example by checking whether there are references, publications, certain combinations of words, the quality of the source, etc. And to place it in the accompanying media and scientific context," explains Sandra Bringay.
And what about checking whether these statements are true or false? "In public discourse, what matters more than knowing whether a piece of information is true is understanding how it is used," replies Konstantin Todorov. "The goal is to move towards tools that give users flags, in other words reference points, to encourage good reading practices."
The researchers are also collaborating with sociologists and journalists, among others, on this project, with a broader goal: to counter manipulation strategies and help foster a healthy public and democratic discourse.
1. LIRMM (UM, CNRS, Inria, UPVD, UPVM)