Fracture-generating AI

What if the widespread use of generative AI systems (siag) in society and academia were to deepen the divide between a trained, aware, and critical elite and a majority indifferent to sources and truth, and potentially open to manipulation? This worrying scenario is nonetheless probable, according to a panel of 40 experts surveyed in a study carried out at Montpellier Recherche en Management (MRM).

On November 30, 2022, the general public discovered the ChatGPT chatbot and its potential, but also the controversy it immediately aroused. In January 2023, New York banned generative AI from all its schools. In Paris, Sciences Po was the first to pull the plug, banning the use of AI for written and oral assessments. "Everywhere, this sudden arrival sparked the same debate: are we going to be able to work with AI, or are we going to have to work in spite of AI?" recalls Florence Rodhain, a researcher at MRM¹ and co-author of a study on the possible pedagogical and strategic impacts of generative AI systems (siag) in higher education.

Prompt oracle

Starting in April 2023, 40 French-speaking AI experts, most of them academics, agreed to take part in a major Delphi survey. Named after the Pythia of Delphi, famous for her oracles, this method aims to encourage debate by submitting propositions to experts, who discuss them and, through iteration, outline possible future scenarios. "It's a method originally developed by the U.S. Army to plan the logistics of the D-Day landings in 1944," recalls Bernard Fallery, researcher emeritus at MRM and co-author of the study. "It is both flexible and precise, with milestones set throughout the process."

On the basis of an extensive literature review, the researchers formulated 20 propositions concerning the use of siag and its consequences in higher education. They then submitted these to the panel members, asking them first to express their degree of agreement or disagreement with each one, then, in a second round, to rank them by importance. "They were given the opportunity to comment on these propositions, which is often more revealing than the answers themselves. By seeing the others' answers after each round, some were able to revise their opinions and move toward a possible consensus," explains Florence Rodhain.

Three little rounds and that's it!

The 40 experts agreed on seven propositions, which they also considered important. Four can be summed up by the idea that AI systems will define new ways of learning, for which students need to be trained as quickly as possible, and that this requires time and budgets. The need to moderate cultural stereotypes in AI training, and the view that AI research should not be halted, are also points of consensus. "The seventh point of agreement is more worrying," warns Florence Rodhain. "It expresses the fear of a growing divide between an elite trained on demanding, high-quality sources and a majority fed plausible information but totally indifferent to the truth: at best a soup of twaddle, at worst bullshit or deepfakes," stresses the Polytech lecturer, who says she shares this fear.

The experts disagreed on six propositions, three of which were deemed important. The first holds that siag's writing abilities are not creativity. "That's what they argued about. We wanted to draw a distinction between creation and creativity, but some think there's no reason to do so," continues the researcher. Another point of disagreement was the obligation to certify all research and scientific mediation work as produced "without siag". A third is the risk that AI will provoke a cognitive revolution by dissociating the accumulation of knowledge from the understanding of phenomena. "ChatGPT can provide plausible and convincing answers on high-level subjects; it can certainly predict, but without explaining or understanding. For some people, including scientists, this is not a problem," notes Bernard Fallery. "For my part, I still believe that science progresses by modeling, not by prediction."

From utopia to dystopia

Based on these agreements and disagreements, the researchers constructed three scenarios and submitted them to the experts, this time asking them to classify each as probable or improbable, desirable or undesirable. Scenario A, called "evolution", was deemed both the most probable and the most desirable. "This was the expected outcome: the experts believe that progressive regulation processes will be put in place and that we will integrate siag into our learning processes while limiting the risks," explains Bernard Fallery.

The surprise came with scenario C, "fracture and collective protest". "When we drafted it, we thought we had really laid it on thick, and yet a majority of experts find it probable," says an astonished Florence Rodhain. Taking up the fear of fracture, this scenario anticipates the reinforcement of exclusion and discrimination linked to a highly unequal appropriation of siag. "It's the threat of a split between the students of France's upper classes, who will have acquired the fundamentals and will know how to use AI to improve themselves, and all the others, who will have been deprived of this acquisition of basic knowledge and will instead be diminished by AI," concludes the researcher.

  1. Montpellier Recherche en Management (UM, UPVD)