[LUM#22] “Everyone has their own definition of artificial intelligence!”
While AI is becoming increasingly present in our lives and our imaginations—bringing with it a host of legitimate questions and fantasies—its technological reality remains unclear to many of us. Neural networks, deep learning, algorithms… A brief overview of these concepts with Anne Laurent, director of the Montpellier Institute of Data Science and vice president for open science and research data at the University of Montpellier.

Technically speaking, how would you describe how AI works?
The general concept of AI refers to a machine’s ability to replicate our cognitive abilities (reasoning, learning, recognizing, creating, etc.). But the focus is often on learning. The idea is to teach a machine a concept by training it to distinguish between situations based on examples, just as humans do. There are many learning methods. The input training data—the examples—could, for instance, be photos of moles. The system must then produce as output whether or not a given mole is a melanoma.
But what happens between this input and this output?
The most powerful machine learning methods today are based on neural networks, which involve highly complex systems and processes. The basic concepts are similar to the way our own nervous system works. The input signal activates a neuron before being transmitted to the next one, and this happens across multiple levels—we refer to these as layers of neurons. Between the input and the output, the system decides whether or not to transmit the signal to the next neuron, when to transmit it, and so on, in order to accomplish its task. This is what is known as neural network learning or deep learning. “Deep” refers to the fact that there are many layers of neurons and connections that must be configured. For the machine to successfully complete its task, it must therefore be trained using massive amounts of data.
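The layered signal-passing described above can be sketched in a few lines of Python. This toy network is purely illustrative—the weights are chosen arbitrarily rather than learned from real data—but it shows the mechanism: each neuron sums its weighted inputs and decides how strongly to transmit a signal to the next layer.

```python
import math

def sigmoid(x):
    # Squashes a neuron's total input into (0, 1): how strongly it "fires".
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, adds a bias, then decides
    # how much signal to pass on to the next layer of neurons.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A toy 2-layer network: 3 input features -> 2 hidden neurons -> 1 output.
hidden = layer([0.8, 0.2, 0.5],                       # input signal
               [[0.4, -0.6, 0.9], [0.1, 0.7, -0.3]],  # hidden-layer weights
               [0.0, 0.1])                            # hidden-layer biases
output = layer(hidden, [[1.2, -0.8]], [0.05])         # output layer
print(round(output[0], 3))  # a score between 0 and 1
```

Training ("deep learning") consists of adjusting those weights and biases, over many layers, until the output score reliably matches the examples—which is why so much data is needed.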
Where does this data come from?
When we talk about AI in research, we’re referring to research data derived, for example, from scientific imaging instruments, DNA sequencing, and so on. This data is generated either by the researcher using the AI or by other teams that have worked on the subject and agree to share it to facilitate this learning process. This is where the importance of open science lies.
Many mathematicians and computer scientists work on AI by creating new algorithms. What is an algorithm, and what is its role?
When you want a machine to do something, you have to speak to it in its language—a programming language like Python, Scala, or Java, for example. But you don’t just code or program without thinking. We start by writing an algorithm—that is, writing down in a formalized way, but in a language understandable to a human, the conceptual description of the data used and the operations we want the machine to perform, along with the conditions under which we’ll have it perform them.
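To make the distinction concrete, here is an invented toy example (not one from the interview). The algorithm, written for a human, might read: "for each mole, compare its measurements to thresholds; flag any mole that exceeds one for expert review." Translated into Python, that conceptual description becomes:

```python
def flag_moles(moles, max_diameter_mm=6.0, max_asymmetry=0.25):
    # Algorithm: for each mole, compare its measurements to thresholds
    # and collect those that need an expert's review.
    flagged = []
    for mole in moles:
        if (mole["diameter_mm"] > max_diameter_mm
                or mole["asymmetry"] > max_asymmetry):
            flagged.append(mole["id"])
    return flagged

moles = [
    {"id": "A", "diameter_mm": 4.0, "asymmetry": 0.10},
    {"id": "B", "diameter_mm": 7.5, "asymmetry": 0.05},
    {"id": "C", "diameter_mm": 5.0, "asymmetry": 0.40},
]
print(flag_moles(moles))  # → ['B', 'C']
```

The algorithm describes the data and the operations; the program is merely its expression in a language the machine understands.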
If we were to define artificial intelligence, what might that definition be?
The term “AI” was coined in 1956; since then, it has evolved over time, with certain subfields developing more than others. In my view, to define AI, we must return to the original vision of its founding fathers, who believed it had the capacity to replicate our cognitive abilities: planning, reasoning, decision-making, learning, perceiving the world, and so on. But we must also take into account the recent breakthrough brought about by commercial generative AI systems. So for many people, AI equals generative AI; for others, AI equals statistics… To each their own definition of AI!
For the past few months, AI news has been dominated by ChatGPT, a generative AI. What is generative AI?
It’s a form of AI that can generate content—text, images, videos, and more. It relies on the input provided, to which it adds everything it has learned from its model. Users can guide it by providing context and a specific goal. For example, they might ask it to explain the causes of melanoma, but in a way that speaks to teenagers who will be spending their summer at the beach, or to their parents so they can better educate their teens. Generative AI contextualizes and adapts the message to the requested purpose. AI fascinates us because it speaks to all of us.
What other types of AI are present in our daily lives?
Some AI systems are designed for tasks known as “classification.” I mentioned moles earlier, but I could also have talked about Pl@ntNet, a Montpellier-based plant recognition app that can classify plants into thousands of existing categories within botanical taxonomy. There are also what are known as “segmenting” AI systems, which are widely used in marketing to categorize customers without preconceived notions. There are AI systems that plan and, for example, help organize schedules by solving highly complex combinatorial and constraint problems. Others provide recommendations, on online video platforms for instance… There are countless subcategories, and all these tasks will become increasingly hidden. AI is using more and more models, and sometimes we don’t even realize it anymore…
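The "segmenting" idea—grouping customers without preconceived categories—is often done with the classic k-means clustering algorithm. The sketch below uses invented data (visits per month, average basket in euros) and invented starting centroids, just to show the principle:

```python
def kmeans(points, centroids, iterations=10):
    # Repeatedly assign each point to its nearest centroid, then move
    # each centroid to the mean of the points assigned to it.
    for _ in range(iterations):
        groups = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            groups[nearest].append(p)
        for i, members in groups.items():
            if members:
                centroids[i] = tuple(sum(coord) / len(members)
                                     for coord in zip(*members))
    return centroids, groups

# Toy data: (visits per month, average basket in euros) for six customers.
customers = [(2, 15), (3, 18), (2, 20), (10, 80), (12, 75), (11, 90)]
centroids, groups = kmeans(customers, [(0, 0), (15, 100)])
```

No one told the system what "occasional shopper" or "loyal big spender" means; the two segments emerge from the data alone.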
One of the central issues in the development of AI is that of control. Can we verify and explain how AI works?
This is a very timely topic because we must act responsibly, and this is now required by the European regulation passed this summer. That said, we need to distinguish between different levels of oversight. The challenges and methods will differ, for example, depending on whether we want the end user to be able to understand the AI’s decision, or whether that explainability is intended for experts—such as in a courtroom. I always have a little tune in my head telling me not to ask AI for more than we ask of humans. We shouldn’t expect it to be error-free, not because it’s imperfect, but because life isn’t binary.
People often talk about biases in artificial intelligence results. Can we limit them?
This is most often due to representation bias in the input data. For example, if we want to train an AI to recommend academic tracks to high school students and simply replicate existing statistics, the AI will never—or very rarely—recommend the math track to girls. To correct these biases, we need to augment the data to provide the machine with data where there are as many girls as boys. This is also why France wants to create its own AI models, so as not to be subject to cultural biases that are not our own.
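One simple way to augment under-represented groups—random oversampling, shown here as my own illustration rather than a method the interview prescribes—is to duplicate minority examples until every group is equally represented before training:

```python
import random

def balance_by_group(records, group_key):
    # Oversample minority groups so each group is equally represented:
    # duplicate randomly drawn minority examples until counts match.
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Toy training set: 4 boys and 1 girl recorded in the math track.
records = [{"gender": "boy"}] * 4 + [{"gender": "girl"}]
balanced = balance_by_group(records, "gender")
```

After balancing, the machine sees as many girls as boys in the math track, instead of merely replicating the historical statistics.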
So we can’t just feed raw databases into the machine; we have to process them first?
Yes, and that’s really just the tip of the iceberg—it’s work that doesn’t get much attention, but it’s very tedious and meticulous, and it helps make the data as usable as possible. That’s one of the key tasks of the ISDM, which I lead.
What exactly is the role of the ISDM?
The ISDM (Montpellier Institute of Data Science) supports researchers’ activities in the areas of research data management and processing. It provides tools, infrastructure, and training, as well as advice and expertise through its Data Clinic. We also organize the AI Halls, a university initiative designed to bring together stakeholders in the fields of AI and data, and we do a lot of work on data storage and security. In short, we serve as a gateway for anyone wishing to jump on the AI bandwagon—and they really should…
You are also Executive Vice President for Open Science. What are the challenges posed by the development of artificial intelligence?
Open science is the driving force behind this learning, but sharing doesn’t mean an open bar! Making information freely available to everyone is one way to share, but it’s not the only one; we can set conditions and restrictions, and consider how to secure, protect, and promote our scientific heritage.
Is AI indispensable to science today?
Yes, absolutely—AI will transform the role of researchers by enhancing our ability to review the state of the art, verify, organize, or challenge our ideas, and so on. It will also speed up data preprocessing, the management of certain administrative tasks, and our efforts to secure funding…
How is the UM responding to this trend?
Everywhere! The UM is pursuing ambitious projects. The goal is both to empower researchers to master all forms of AI and to demonstrate AI’s potential to adapt and gradually transform our practices. All while operating within the most ethical framework possible, without “leaking” our ideas and without destroying the planet. The UM also strongly supports AI research and the development of new algorithms. Cutting-edge methods are being developed in Montpellier. This is the case, for example, with AI methods in healthcare, such as federated AI, in collaboration with Inria. There isn’t a huge gap between application and development; the two feed into each other.
We weren’t selected in the final round of the IA Cluster call for proposals—does this hinder the development of AI in Montpellier?
No, because the AI cluster initiative is moving forward with great momentum. Stakeholders from the socio-economic sector have reaffirmed their commitment to sustaining this momentum. The Metropolitan Area is highly motivated, the Region adopted a robust AI strategy this summer, and our funding partners are on board… The next challenge will be to structure this momentum and keep it going. We are also fortunate to be working with the University Hospital, which is among the most dynamic and advanced institutions in the field of AI and health data processing. We have all the pieces of the puzzle in Montpellier, with a very, very favorable set of stakeholders that I don’t see anywhere else.