Debate: How to rethink assessment in higher education
International university rankings are now a reality of globalization, providing a different perspective from the historical reputation of institutions or the evaluation reports produced about them. They are based on purely quantitative comparative performance data that must be interpreted within their scope: what data is used? What are the indicators? What are the calculation algorithms?
Michel Robert, University of Montpellier

For example, the Shanghai ranking only covers the field of research, ignoring other fundamental missions of universities: the transmission of knowledge, the awarding of degrees, and professional integration.
While important for our international reputation, these rankings are often out of step with the information needs of citizens, who, with limited financial resources, are primarily looking for the best options close to home for their children's education. In practical terms, they are more interested in a specific bachelor's, technical college, engineering, or master's degree program than in the international recognition of the university.
It is also well known that in France, what is ranked under the name "university" is in fact a broader ecosystem, often with an undeniable contribution from research organizations.
Complex environment
Any mention of "evaluation" in higher education and research quickly leads to tensions linked to our history and practices, on subjects such as student guidance, selective courses, tuition fees, and academic freedom. A distinction should be made between institutional expertise, carried out by a peer committee, and control, inspection, or auditing.
The current debates on the multi-year research programming law clearly illustrate the issue of the organization and usefulness of institutional evaluation, which is the focus of this article.
There are many topics for discussion: what is the role of evaluation, and how useful is it? Is it accepted by the communities being evaluated? What impact does it have? What practices can be considered to better communicate the results and ensure that they are perfectly clear to all stakeholders, particularly future students?
Peer review of a higher education and public research institution (university, school, laboratory, research organization, etc.) involves three parties:
- the entity being evaluated;
- the committee of experts (peers);
- the organizer: Hcéres (the High Council for the Evaluation of Research and Higher Education), the CTI (Commission for Engineering Degrees), or a foreign evaluation agency.
The context for organizing an evaluation is complex and involves several variables. The relationship between evaluators and those being evaluated must be based on trust and the absence of conflicts of interest. Maintaining a clear distance between the evaluation itself and subsequent decisions (awarding a label, allocating resources, etc.) is essential. The current health crisis highlights the importance of scientific integrity, in research but also in the training of doctoral students and undergraduates.
The sole purpose of evaluation cannot be to sanction and regulate the system, at the risk of leading to adaptation biases among stakeholders. It must be designed with a threefold perspective: to assist in the development of the entities being evaluated, to aid in the decision-making process of supervisory authorities, and to inform the public and users of higher education.
Current issues
The peer review mechanisms established by Hcéres, in accordance with current legislation on higher education and research, thus serve to clarify the criteria used and observe reality (self-assessment report, indicators, committee visit), all of which are essential steps in forming a judgment (expert committee report). These procedures are also part of a quality and continuous improvement approach, formalized at the European level as a result of the Bologna Process.
The current requirement to evaluate all training programs and research units nevertheless raises questions about the effectiveness of the evaluation system. Given the burden imposed by this "industrialization" of a very large number of expert assessments (several hundred for a university every five years), the system does not allow for investigations that could yield greater added value for a given institution.
Furthermore, the institutional evaluation conducted by Hcéres covers around fifty institutions each year, while others, such as private establishments that do not have a contract with the state or special institutions such as ENA, have never been evaluated by Hcéres.
The definition of the unit of evaluation, i.e., the components to be evaluated within a university (degrees, faculties, schools, institutes, departments, teaching departments, research departments, laboratories, research teams), should not be fixed, since the autonomy of institutions has led to different organizational models.
It is therefore necessary to define a flexible framework that allows the diversity of institutions to express their specific characteristics and strategies, rather than forcing them into a single mold. It is in this sense that updating the law is essential.
Possible developments
However, scientific and educational life, creativity, and student success cannot be limited to standardized and fixed indicators or rankings. Risk-taking and detecting "weak signals" in innovation, for example, are fundamental challenges for progress.
How can we develop a performance measurement system that is not prescriptive, that can be adapted to the diversity of individuals, institutions, and ecosystems, and that stimulates institutional dynamics? In particular, we need to assess the levers used by institutions to improve the efficiency of their actions and performance.
A global change in operating methods cannot be reduced to an isolated action by an evaluation agency comparing entities, especially since the past rating of laboratories highlighted the limitations of such an approach (and led to its rejection), if only because of the limited territorial scope of the comparisons made.
We could therefore consider discussing a more comprehensive approach involving not only evaluation agencies, but also institutions and supervisory ministries, integrating acceptance by the communities concerned in the institutions into the process. Let us consider a few possibilities:
- in terms of training and student success, by distinguishing between bachelor's degrees (and issues related to the law on student guidance and success) and master's and doctoral degrees (and issues related to research), and by using public data on student tracking, certified by institutions and updated annually at the national level, as currently practiced by the CTI for engineering schools;
- in terms of research, by distinguishing the contribution of laboratories to an institutional strategy, supplemented by national analyses by major disciplinary field (involvement of the Observatory of Science and Technology, coordinated evaluation of research teams within the same scope, national disciplinary syntheses) to analyze France's position.
To maintain a climate of trust, it is therefore proposed that the current evaluation methods be gradually modified rather than undergoing a sudden and radical transformation, which could lead to rejection. As can be seen in other European countries, the institution should now be placed at the center of the evaluation process as the main actor in its own internal and then external evaluation.
These structural and structuring considerations are all the more relevant today as they take place in a context profoundly impacted by climate change and the health crisis, which, by forcing us to shift to a society of physical distancing, will inevitably lead us to change course.
Michel Robert, professor of microelectronics, University of Montpellier
This article is republished from The Conversation under a Creative Commons license. Read the original article.