
Are university rankings true, fair, or helpful?

For Elizabeth Gadd, it is time for the higher education sector to think carefully about the effects of global university rankings. To explain why, she responds to three simple questions.

The perverse effects of university rankings

A triopoly of university rankings - the QS World University Rankings, the Times Higher Education World University Rankings and the Shanghai Academic Ranking of World Universities - is growing in power and influence. As a consequence, governments invest in 'excellence initiatives', seeking not to improve educational standards or research quality, but to increase their chances of getting some universities into the upper echelons of the rankings. Some countries have merged smaller institutions to improve their chances of getting into the coveted Top 100. And the UK government recently announced it would use the rankings to identify 'high potential individuals' for fast-track visas. In this context, it is hardly surprising that students choose to study at highly ranked institutions. This in turn affects universities' financial stability and leads some institutions to seek any means - legitimate or illegitimate - to climb the rankings.

Institutions that complain about this situation are often accused of eschewing accountability. However, I think the more pertinent question is: to whom should institutions be accountable? Is it to the unappointed and unaccountable global university rankings? I would argue not. And given universities' status as truth-seekers, I would suggest it is they who should be holding the rankings to account. They can do so by asking three simple questions: are the rankings true, are they fair, and are they helpful?

Are they true?

For a sector in pursuit of more responsible research assessment, universities are often criticised for turning a blind eye to evaluation approaches that would not meet their own standards of scientific rigour. University rankings are an excellent example of this.

The ‘research question’ posed by the rankings - namely, “which is the best university in the world?” - is poorly defined. Best at what? The characteristics of a 'top' university are not universally agreed, and neither are the weights that should be assigned to each one. Furthermore, the indicators then used to assess those characteristics, e.g., “alumni with Nobel prizes”, fall woefully short. The resulting data is therefore of very poor quality and the outcomes are presented without error bars. Can we confidently say the rankings are true? A categorical no.

Are they fair?

If a ranking is not true, it is by definition unfair, as it will favour the wrong institutions. However, the inequities of the rankings run deeper than that. Research has shown that the winners of university rankings are always the old, large, wealthy, research-intensive, science-focused, English-speaking universities of the Global North. The heavy use of bibliometric data in the rankings exacerbates this, as most of the journals indexed (over 80% in Scopus, an abstract and citation database) are based in the Global North. Unfortunately, when the rankings are then used for decision-making, as in the UK's High Potential Individual visa scheme, whole continents such as Latin America and Africa are excluded from opportunities.

However, it is not only that certain groups are systematically excluded, but that some get more help than others to succeed. This is because the ranking agencies do not only rank institutions: they also offer (suspiciously successful) consultancy, data and other services to help institutions improve their ranking. This represents a significant conflict of interest. There even exists an exclusive World 100 Reputation Network (conspicuously co-located with the Times Higher Education offices), open only to those already in the top 100 who are willing to pay the annual €8,000 fee. The sole purpose of this network is to maintain members’ position in the rankings and, by extension, keep others out.

Are they helpful?

If a ranking is neither true nor fair, it is extremely unlikely to be helpful. However, a frequent counterargument from ranking agencies is that they offer an invaluable service to students who have no other means of distinguishing between institutions. Given that the rankings fail to assess educational quality, their helpfulness to students is doubtful. Students’ use of the rankings might rather be explained by the streetlight effect: they seek answers in the rankings not because the answers are there, but because that is where the only data is to be found.

Of course, another interpretation is that students are not seeking the best place to study at all, but the best place to put on their CV. As Colin Diver writes, “Post-secondary education has become a competition for prestige. And rankings… have become the primary signifiers.” In this way, rankings are to universities what journal impact factors are to journals and what h-indices are to faculty: false signifiers of merit. However, we know that with constant use they can ultimately become self-fulfilling prophecies.

Time for action

I believe it is time for the higher education sector to think carefully about the effects of the ‘big three’ university rankings. Are they encouraging the diversity of organisations required to meet the many and varied needs of society, or are they leading to greater homogeneity? Are they inspiring winners, or making losers out of everyone? Are they encouraging the collaborations we need to solve the world’s grand challenges, or are they making competitors out of us all?

Perhaps a better question is: who stands to benefit most from the existence of university rankings? Here the answer is clear: the winners are the university rankings themselves.

So, what can institutions do? Three things. Firstly, sign the Agreement on Reforming Research Assessment and join the Coalition for Advancing Research Assessment (CoARA). One of the agreement’s ten core commitments is to avoid the use of rankings in the assessment of researchers. Secondly, sign up to the INORMS Research Evaluation Group’s More Than Our Rank initiative, which enables universities to describe - in a qualitative way - how much they have to offer the world, far beyond what is currently captured by the rankings. Thirdly, demand greater accountability from the rankers themselves, insisting on greater transparency, labelling, and health warnings around the use of their ranking products.

The vitality of our sector depends on it.

Note: This article is based on the author’s contribution to the session “Can rankings support universities' quest for excellence?” at the 2023 EUA Annual Conference.

“Expert Voices” is an online platform featuring original commentary and analysis on the higher education and research sector in Europe. It offers EUA experts, members and partners the opportunity to share their expertise and perspectives in an interactive and flexible exchange on key topics in the field.

All views expressed in these articles are those of the authors and do not necessarily reflect those of EUA.

Elizabeth Gadd

Elizabeth (Lizzie) Gadd is Research Policy Manager at Loughborough University, UK. She also chairs the International Network of Research Management Societies (INORMS) Research Evaluation Group and serves as a Vice-Chair of the Coalition for Advancing Research Assessment (CoARA).
