[LUM#22] Generative AI, a generator of fractures
What if the widespread use of generative AI systems (SIAGs) in society and in universities were to widen the gap between an educated, informed, and critical elite and a majority indifferent to sources and to truth, and potentially susceptible to manipulation? This is one of the worrying yet likely scenarios identified by a panel of 40 experts in a study conducted by Montpellier Recherche Management.

On November 30, 2022, the general public discovered the ChatGPT chatbot and its potential, but also the controversies it immediately sparked. As early as January 2023, New York banned generative AI from all its public schools. In Paris, Sciences Po was the first to take action, banning its use for written or oral assessments. “Everywhere, this sensational arrival sparked the same debate: will we be able to work with AI, or will we have to work despite AI?” recalls Florence Rodhain, a researcher at MRM1 and co-author of a study on the potential pedagogical and strategic impacts of generative AI systems (SIAGs) in higher education.
Oracle prompt
Starting in April 2023, 40 French-speaking AI experts, mostly academics, agreed to participate in a large-scale Delphi survey. Named after the Pythia of Delphi, famous for her oracles, this method aims to foster debate by presenting experts with propositions to discuss; through this iterative process, they are led to define possible future scenarios. “It’s a method originally developed by the U.S. military to plan the logistics of the 1944 Normandy landings,” recalls Bernard Fallery, a senior researcher at MRM and also a co-author of the study. “It’s both flexible and precise, with milestones set throughout the process.”
Based on an extensive literature review, the researchers formulated 20 propositions on the use of SIAGs and their implications for higher education. They presented these to the panel members, asking them to indicate their level of agreement or disagreement with each one and, in a second round, to rank the propositions by importance. “They had the opportunity to comment on these statements to help refine them, which is often more revealing than the responses themselves. By seeing others’ responses after each round, some were able to adjust their views and move toward a potential consensus,” explains Florence Rodhain.
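The round-by-round logic described above can be sketched as a small simulation. The rating scale, the 70% consensus threshold, and the proposition labels below are illustrative assumptions for the sketch, not details taken from the MRM study.

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings (1 = strongly disagree, 5 = strongly agree)
# from a ten-expert panel for three illustrative propositions.
ratings = {
    "AI defines new ways of learning": [5, 4, 5, 4, 4, 5, 4, 5, 4, 4],
    "AI writing counts as creativity": [2, 5, 1, 4, 3, 5, 2, 1, 4, 3],
    "Do not halt AI research":         [4, 5, 4, 4, 5, 4, 5, 4, 4, 5],
}

def delphi_status(scores, threshold=0.7):
    """Classify a proposition after a Delphi round: consensus agreement,
    consensus disagreement, or dissensus, based on the share of experts
    on each side of the scale. The threshold is an assumption."""
    n = len(scores)
    agree = sum(s >= 4 for s in scores) / n
    disagree = sum(s <= 2 for s in scores) / n
    if agree >= threshold:
        return "consensus: agree"
    if disagree >= threshold:
        return "consensus: disagree"
    return "dissensus"

for prop, scores in ratings.items():
    print(f"{prop}: {delphi_status(scores)} (mean {mean(scores):.1f})")
```

In a real Delphi process, propositions still classified as dissensus would be sent back to the panel, together with the anonymized distribution of answers, for another round.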
Three little rounds and then they're gone!
The 40 experts agreed on seven propositions, which they also deemed important. Four of these can be summed up by the idea that generative AI systems will define new ways of learning in which students must be trained as soon as possible, which requires time and funding. There is also consensus on the need to moderate cultural stereotypes in AI training and on the importance of not halting AI research. “The seventh point of agreement is more worrying,” warns Florence Rodhain, who teaches at Polytech and says she shares this concern. “It expresses the fear of a growing divide between an elite educated with high-quality, rigorous sources and a majority fed plausible information but completely indifferent to the truth: at best a mishmash of nonsense, at worst bullshit or deepfakes.”
The experts disagreed on six propositions, three of which they deemed significant. The first holds that the writing skills of SIAGs do not amount to creativity. “They really went at it over that one. We wanted to distinguish between creation and creativity, but some believe there’s no reason to do so,” the researcher continues. Another point of disagreement is the requirement to certify any research or science-communication work as “AI-free.” A third is the risk that AI could trigger a cognitive revolution by separating the accumulation of knowledge from the understanding of phenomena. “ChatGPT can provide plausible and convincing answers on high-level topics; it can certainly predict, but without explaining or understanding. For some, including scientists, this isn’t a problem,” notes Bernard Fallery. “For my part, I continue to believe that science progresses through modeling, not prediction.”
From Utopia to Dystopia
Based on these points of agreement and disagreement, the researchers developed three scenarios and presented them to the experts, asking them this time to rate them as likely or unlikely, desirable or undesirable. Scenario A, called “evolution,” was deemed the most likely and the most desirable. “That was the expected result; the experts believe that gradual regulatory processes will be put in place and that we will integrate SIAGs into our learning while limiting the risks,” explains Bernard Fallery.
The surprise came from Scenario C, titled “Fractures and Collective Protests.” “As we wrote it, we thought we had really laid it on thick, and yet most experts find it plausible,” Florence Rodhain remarks with surprise. Drawing on fears of social division, the scenario anticipates rising exclusion and discrimination linked to a highly unequal distribution of SIAGs. “It’s the threat of a split between students from the upper echelons of French society, who will have acquired the fundamentals and know how to use AI to augment themselves, and all the others, whom it will have deprived of this basic knowledge and who will instead be diminished by AI,” the researcher concludes.
For more information:
- Read the study: Generative AI Will Widen Social Divides
- Listen to Florence Rodhain and Salloua Zgoulli as they present this study on the program *A l’UM la science*
UM podcasts are now available on your favorite platform (Spotify, Deezer, Apple Podcasts, Amazon Music, etc.).
1. MRM (UM, UPVD) ↩︎