Discussion: How to Rethink Assessment in Higher Education
International university rankings are now a reality of globalization, offering a different perspective than institutions’ historical reputations or the evaluation reports conducted on them. They are based on purely quantitative comparative performance metrics that must be interpreted within their specific contexts: What data is used? What are the indicators? What are the calculation algorithms?
Michel Robert, University of Montpellier

For example, the Shanghai Ranking focuses solely on research, overlooking other core functions of universities: the transmission of knowledge, the awarding of degrees, and career readiness.
While these rankings are important for our international reputation, they often fail to address the information needs of citizens, who—working with limited financial resources—are primarily looking for the best educational options in their local areas for their children. In practical terms, they are more interested in specific undergraduate, technical college, engineering, or master’s programs than in the international recognition of the university itself.
It is also well known that, in France, what these rankings label a "university" is in fact an entire ecosystem, one that very often benefits from the undeniable contributions of research organizations.
Complex environment
Any mention of “evaluation” in higher education and research quickly sparks tensions rooted in our history and practices, particularly regarding issues such as student counseling, selective programs, tuition fees, and academic freedom. It is important to distinguish, in particular, between institutional evaluation—conducted by a peer review committee—and oversight, inspection, or auditing.
The current debates surrounding the multi-year research planning law clearly illustrate the issue of the organization and usefulness of institutional evaluation, which is the focus of this article.
There are many topics for discussion: What is the role of assessment, and what is its purpose? Is it accepted by the communities being assessed? What is its impact? What approaches can be considered to better communicate the results and ensure that they are clearly understandable to all stakeholders, particularly prospective students?
The peer review of a public higher education and research institution (university, college, laboratory, research organization, etc.) involves three key stakeholders:
- the entity being evaluated;
- the expert (peer) committee;
- the accrediting body: Hcéres (High Council for the Evaluation of Research and Higher Education), the CTI (Engineering Degrees Commission), or a foreign accreditation agency.
The context in which an evaluation is conducted is complex and involves multiple factors. The relationship between evaluators and those being evaluated must be based on trust and the absence of conflicts of interest. The separation between evaluation and decision-making (awarding a label, allocating resources, etc.) is essential. The current health crisis highlights, in particular, the importance of scientific integrity, not only in research but also in the training of doctoral students and undergraduates.
Evaluation cannot be aimed solely at sanctioning and regulating the system, as this risks leading to adaptive biases among stakeholders. It must be designed with a threefold objective in mind: to support the development of the entities being evaluated, to assist regulatory authorities in their decision-making, and to inform the public and users of higher education.
Current issues
The peer review mechanisms established by Hcéres, in accordance with the current law on higher education and research, thus serve to clarify the selected criteria and assess the actual situation (self-evaluation report, indicators, committee visit)—all of which are essential steps in reaching a conclusion (report of the expert committee). These procedures are also part of a quality assurance and continuous improvement process formalized at the European level as a result of the Bologna Process.
The current requirement to evaluate all academic programs and research units nevertheless raises questions about the effectiveness of the evaluation system. Given the burden imposed by this “industrialization” of a very large number of evaluations (several hundred per university every five years), the system does not allow for investigations that could yield greater benefits for a given institution.
Furthermore, the institutional evaluation conducted by Hcéres covers about fifty institutions each year, while other institutions, such as private institutions not under contract with the government or specific institutions like ENA (the École nationale d'administration), have never been evaluated by Hcéres.
The definition of the scope of evaluation—that is, the components to be evaluated within a university (degrees, faculties, schools, institutes, academic units, teaching departments, research departments, laboratories, and research teams)—should not be set in stone, as institutional autonomy has led to different organizational models.
It is therefore important to establish a flexible framework that allows institutions, in all their diversity, to express their unique characteristics and strategies, rather than forcing them into a standardized mold. It is in this sense that the need to update the law becomes clear.
Possible developments
However, academic and educational life, as well as student creativity and success, cannot be reduced to standardized, static indicators or rankings. Taking risks and identifying “weak signals” in innovation, for example, are fundamental to making progress.
How can we develop a performance metric that is not prescriptive, that can be adapted to the diversity of individuals, institutions, and ecosystems involved, and that fosters institutional dynamics? In particular, this involves assessing the strategies institutions use to improve the efficiency of their operations and their performance.
A comprehensive shift in operating practices cannot be reduced to a single comparison exercise run by an evaluation agency, especially since the rating of laboratories in the past exposed the limitations of such an approach, and led to its rejection, if only because of the limited geographical scope of the comparisons involved.
We could therefore consider discussing a more comprehensive approach that involves not only evaluation agencies but also institutions and the relevant government ministries, while ensuring that the process includes the buy-in of the communities served by these institutions. Here are a few ideas:
- in education and student success, by distinguishing between the Bachelor's level (and the issues raised by the law on student guidance and success) and the Master's and doctoral levels (and the issues tied to research), and by using public data, certified by the institutions, on student outcomes, updated annually at the national level, as the CTI already does for engineering schools;
- in research, by distinguishing the contribution of laboratories to each institution's strategy, supplemented by national analyses by major disciplinary field (involving the Observatory of Science and Technology, coordinated evaluation of research teams within the same scope, national disciplinary overviews) in order to assess France's standing.
To maintain a climate of trust, it is therefore proposed that current evaluation methods be gradually adapted rather than undergoing a sudden and radical overhaul—which carries the risk of rejection—by placing, as is the case in other European countries, the institution at the center of the evaluation process as the primary actor in its own internal and then external evaluation.
These structural and formative reflections are all the more relevant today, as they arise in a context profoundly shaped by the climate transition and the health crisis, a crisis that, by forcing a shift toward a society of physical distancing, will inevitably lead us to change course.
Michel Robert, professor of microelectronics, University of Montpellier
This article is republished from The Conversation under a Creative Commons license. Read the original article.