To reform universities, first tackle global rankings

The reputation and finances of universities often hinge on their positions in world league tables. Students use the rankings as a quick way to identify the best place to study, or at least what a future employer might consider the best. Even a small change in rank can affect the number of students applying to a university, altering tuition-fee revenues (R.-D. Baltaru et al. Stud. High. Educ. 47, 2323–2335; 2022).

And governments like the simplicity of rankings. Many will fund their citizens' studies abroad only at universities that occupy top positions in the lists. National investment initiatives, such as Russia's Project 5-100 and Japan's Top Global University Project, often focus on universities that stand a chance of reaching the upper echelons of the rankings. The UK government offers its High Potential Individual visa only to applicants who have studied at highly ranked universities.

This dependence on rankings means that universities are shaped not by societal needs or by innovations arising within the international higher-education community, but by self-appointed third-party ranking agencies.

The indicators used by the dominant flagship rankings do not reflect the full range of qualities and functions of higher-education institutions. Each agency uses a slightly different ranking method, but all concentrate on a narrow set of criteria, leaning heavily on publication-based measures such as citations and on reputation surveys.

The consequence is that most of the world's universities are driven to pursue a single kind of “excellence”: one that resembles the old, wealthy, conservative, knowledge-intensive institutions of high-income countries.

Meanwhile, universities face a host of problems, from declining government funding and public trust to the waning relevance of degree programs in a rapidly changing labor market and the need to demonstrate the real-world impact of research. There is no shortage of ideas on how universities could change in response to these challenges. However, the dominance of rankings as a measure of institutional success means that universities have little incentive to try. Many fear that moving away from the status quo could lead to a drop in the rankings, making it harder to attract funding and talent.

Academics and universities must push for change. Here, I describe how.

Imperfect system

In my opinion, the leading global rankings rely too heavily on data sources that they can access easily: publication data, or survey data that they collect themselves. (The Nature Index, created by Springer Nature, ranks universities solely on the basis of research articles published in natural-science and health-science journals.) Many rankings base their teaching evaluations on unreliable proxies, such as faculty-to-student ratios or the number of alumni who have won Nobel prizes. Most place little or no weight on open-science practices, societal impact, outreach, or efforts to improve diversity, equity and inclusion.

Ranking indicators are also weighted differently, without clear justification. For example, a ranking might assign a 20% weight to faculty citations but only 5% to employment outcomes. And rankings are presented without error bars, even though the data on which they rest are imperfect.
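To see why arbitrary weights matter, consider a toy calculation (a minimal sketch; the institutions, indicator scores and weightings below are all hypothetical, not those of any real ranking agency). Two equally defensible weightings reverse the order of the same two institutions:

    # Toy composite-score calculation; all data and weights are hypothetical.
    universities = {
        "Univ A": {"citations": 90, "employment": 40},  # publication-strong
        "Univ B": {"citations": 70, "employment": 85},  # employment-strong
    }

    def composite(scores, weights):
        # A ranking's headline number is typically a weighted sum of indicators.
        return sum(scores[key] * weight for key, weight in weights.items())

    # Two plausible weightings, differing only in emphasis.
    for weights in ({"citations": 0.20, "employment": 0.05},
                    {"citations": 0.05, "employment": 0.20}):
        order = sorted(universities,
                       key=lambda name: composite(universities[name], weights),
                       reverse=True)
        print(weights, "->", order)
    # The first weighting puts Univ A on top (20.0 versus 18.25);
    # the second puts Univ B on top (20.5 versus 12.5).

Without a principled justification for the weights, and without error bars on the inputs, the headline ordering is largely an artefact of these choices.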

Efforts to move away from narrow, publication-dominated forms of assessment have largely placed the onus on universities to change how they evaluate their staff and faculty members. Many universities have risen to this challenge. Narrative summaries and biosketches, accounts written by researchers that highlight the full range of their contributions, are becoming increasingly common. And templates that let staff evidence a wider range of contributions, known as career assessment matrices, are being developed at a growing number of European universities.

But there is a limit to how far universities can move away from citation- and publication-based assessments if they continue to be judged on these metrics by global university rankings.

[Image: a large group of university students in a lecture hall, some raising their hands. Caption: Students use rankings to select universities.]

Over the past three years, several groups, including the Union of Students in Ireland and the International Institute for Global Health, a United Nations University think tank, have called on universities to break this stranglehold. They have urged universities to stop providing data to the rankers, and some, such as Utrecht University in the Netherlands and the University of Zurich in Switzerland, have done so. The groups have also asked universities to stop promoting their ranking positions, and to reduce the extent to which the rank of a person's previous institution is taken into account in decisions such as whom to hire. And they support More Than Our Rank, an initiative that encourages institutions to describe, in a narrative statement on their web pages, the many achievements, activities and aspirations that are not reflected in the rankings. (I chair the INORMS Research Evaluation Group, which developed the More Than Our Rank initiative.)

These are sound recommendations, but expecting individual universities to take responsibility will not deliver wholesale reform of how university performance is defined and assessed. Achieving that requires a three-pronged approach.

Call out current rankings

The higher-education sector must collectively, and openly, agree that the current rankings are not fit for purpose. It might seem unlikely that the institutions currently at the top of the rankings, mostly in Europe and the United States, would call out a system that benefits them. But geopolitical shifts should give them pause. Chinese and Indian universities now occupy more top spots than before, while universities in the United Kingdom, the United States and Australia are slipping. If those currently at the top wait too long to speak out, they could soon find themselves further down the lists, with less influence over reforms that would benefit all institutions.

The call for change must include an education campaign aimed at the students and policymakers who rely on rankings to make decisions. This should be spearheaded by an independent body of experts from the international higher-education sector, many of whom have already raised concerns about the harms of global university rankings (see, for example, go.nature.com/4hy1kq9). The goal should be to help consumers of rankings to understand that “Which university is the best in the world?” is not a useful question. “Which university is best for me, given that I care about X and Y?” is a better one, but not one that the current metrics can answer well.

The campaign must acknowledge that good assessments need to be detailed and contextualized, and will take time to digest. Just as the “best” researcher cannot be identified by the single number that is their h-index, the “best” university cannot be identified by the single number that is its rank. This message might not be popular, but it is crucial.

Collect better data
