[LUM#22] Generative AI, a driver of fractures
What if the widespread use of generative AI systems (SIAGs) in society and universities were to widen the gap between an educated, aware, and critical elite and a majority indifferent to sources and truth, and therefore susceptible to manipulation? That is one of the worrying yet likely scenarios identified by a panel of 40 experts in a study conducted by Montpellier Recherche Management.

On November 30, 2022, the general public discovered the ChatGPT chatbot and its potential, along with the controversies it immediately sparked. In January 2023, New York banned generative AI from its public schools. In Paris, Sciences Po was the first to act, prohibiting its use in written and oral assessments. "Everywhere, this sensational arrival sparked the same debate: will we be able to work with AI, or will we have to work despite it?" recalls Florence Rodhain, a researcher at MRM¹ and co-author of a study on the possible pedagogical and strategic impacts of generative AI systems (SIAGs) in higher education.
Prompt oracle
Starting in April 2023, 40 French-speaking AI experts, most of them academics, agreed to participate in a large-scale Delphi survey. Inspired by the Pythia of Delphi, famous for her oracles, this method aims to encourage debate by submitting proposals to experts for discussion, which, through iteration, leads them to define possible future scenarios. "This method was originally developed by the US military to plan the logistics of the 1944 landings," recalls Bernard Fallery, emeritus researcher at MRM and co-author of the study. "It is both flexible and precise, with milestones set throughout the process."
Based on extensive bibliographic research, the researchers formulated 20 proposals concerning the use of SIAGs and their consequences in higher education. They then submitted these proposals to the panel members, asking them to express their degree of agreement or disagreement with each one, and then, in a second round, to rank them according to their level of importance. "They had the opportunity to comment on these proposals in order to develop them further, which is often more revealing than the responses themselves. By discovering the responses of others after each round, some were able to modify their opinions in order to move towards a possible consensus," explains Florence Rodhain.
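The iterative convergence the researchers describe can be made concrete with a toy example. The sketch below is a hypothetical illustration of a common Delphi-style aggregation heuristic (consensus when the interquartile range of Likert ratings is small), not the study's actual protocol; the function names, the 1–5 scale, and the IQR threshold are all assumptions for the sake of the example.

```python
from statistics import median

# Hypothetical Delphi-style aggregation (illustration only, not the study's
# actual method): each expert rates a proposal on a 1-5 Likert scale, and a
# proposal counts as consensual when the interquartile range (IQR) is small.

def quartiles(ratings):
    """Return (Q1, Q3) of a list of ratings using the median-split method."""
    s = sorted(ratings)
    mid = len(s) // 2
    lower = s[:mid]
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]
    return median(lower), median(upper)

def has_consensus(ratings, max_iqr=1.0):
    """A common Delphi heuristic: consensus when IQR <= max_iqr."""
    q1, q3 = quartiles(ratings)
    return (q3 - q1) <= max_iqr

# Round 1: a divided panel; round 2: opinions converge after each expert
# has seen the other panelists' responses and comments.
round1 = [1, 2, 2, 3, 4, 5, 5]
round2 = [3, 4, 4, 4, 4, 5, 5]
print(has_consensus(round1), has_consensus(round2))  # → False True
```

The point of the example is the between-rounds dynamic the researchers describe: the ratings themselves do not need to be unanimous, only tight enough that the panel can be said to agree.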
Three little turns and then they're gone!
The 40 experts agreed on seven proposals, which they also deemed important. Four of these can be summarized by the idea that SIAGs will define new modes of learning in which students must be trained as quickly as possible, which requires time and budgets. There was also consensus on the need to moderate cultural stereotypes in AI training and not to halt AI research. "The seventh point of agreement is more worrying," warns Florence Rodhain. "It expresses the fear of a widening gap between an elite educated on demanding, high-quality sources and a majority fed information that is plausible but wholly indifferent to truth: at best a soup of nonsense, at worst bullshit or deepfakes," emphasizes the Polytech professor, who says she shares this fear.
The experts disagreed on six proposals, three of which were considered important. The first holds that the writing abilities of SIAGs do not fall within the realm of creativity. "They fought over this one. We wanted to distinguish creation from creativity, but some see no point in doing so," the researcher continues. Another point of disagreement was the requirement to certify all scientific research or science-communication work as "SIAG-free." A third was the risk that AI could trigger a cognitive revolution by dissociating the accumulation of knowledge from the understanding of phenomena. "ChatGPT can provide plausible and convincing answers on high-level topics; it can certainly predict, but without explaining or understanding. For some, including scientists, this is not a problem," notes Bernard Fallery. "For my part, I continue to believe that science progresses through modeling, not prediction."
From utopia to dystopia
Based on these agreements and disagreements, the researchers constructed three scenarios and presented them to the experts, asking them to classify them as probable or improbable, desirable or undesirable. Scenario A, called "evolution," was deemed the most probable and desirable. "This was the expected result. Experts believe that gradual regulatory processes will be put in place and that we will integrate SIAGs into our learning while limiting the risks," explains Bernard Fallery.
The surprise came from scenario C, called "fractures and collective protests." "When we wrote it, we thought we had really laid it on thick, and yet a majority of experts find it probable," says Florence Rodhain with surprise. Echoing the fears of division, it anticipates increased exclusion and discrimination linked to highly unequal access to AI. "It is the threat of a split between students from the upper echelons of French society, who will have acquired the fundamentals and will know how to use AI to enhance themselves, and all the others, who will have been deprived of this basic knowledge and who will be diminished by AI," concludes the researcher.
To go further:
- Read the study Generative AI will widen divisions in society
- Listen to Florence Rodhain and Salloua Zgoulli presenting this study on the program A l’UM la science (Science at UM). UM podcasts are available on your favorite platform (Spotify, Deezer, Apple Podcasts, Amazon Music, etc.).
1. MRM (UM, UPVD)