
It is beyond doubt that rankings have become a significant part of the tertiary education landscape, both globally and locally.
In this landscape, rankings have risen in importance and proliferated in ways that would once have been hard to imagine. Ranking has become a commercial enterprise, and with it the companies and organizations that rank colleges and universities have grown increasingly sophisticated. Rankings now play a major role in shaping the opinions of current and potential students, parents, employers, and governments about the quality of tertiary education institutions.
The emergence of this rankings obsession is, at the same time, a legitimate source of concern about their misuse, especially when rankings are used solely for promotional purposes or, even worse, when they become the main driver of policy decisions for governments and tertiary education institutions. It is now common to see entire government policies and programs apparently more concerned with position in the rankings than with the relevance of their tertiary education institutions. Sometimes this results in diverting significant amounts of resources to some institutions while limiting support for others. If rankings become the end rather than the means towards better tertiary education, that should be a matter of concern. The excessive importance that institutional and government decision-makers place on rankings can be both disturbing and alarming.
It is evident that rankings do have value as a reference and as a basis for comparison. However, they do not always serve as the best proxy for the quality and relevance of tertiary education institutions. Let's keep in mind that any ranking is ultimately an arbitrary arrangement of indicators aimed at labeling what the ranker has pre-defined as a "good" educational institution.
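To make this point concrete, here is a minimal, purely illustrative sketch (the institutions, indicator scores, and weights below are invented for the example) showing how the same data can crown a different "top" institution simply by changing the weights a ranker happens to assign:

```python
# Illustrative only: invented institutions and indicator scores (0-100).
institutions = {
    "University A": {"research": 90, "teaching": 60, "community": 50},
    "University B": {"research": 55, "teaching": 95, "community": 85},
}

def rank(weights):
    """Return institution names ordered by a weighted sum of indicator scores."""
    scores = {
        name: sum(weights[k] * v for k, v in indicators.items())
        for name, indicators in institutions.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# A research-heavy weighting puts University A first...
print(rank({"research": 0.7, "teaching": 0.2, "community": 0.1}))
# ...while a teaching- and community-oriented weighting reverses the order.
print(rank({"research": 0.2, "teaching": 0.4, "community": 0.4}))
```

Nothing about either weighting is objectively "correct"; the outcome simply reflects the ranker's pre-defined notion of what matters.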
Those in favor of rankings, especially the rankers themselves, may argue that in the absence of sound and comparable information, rankings are the best option for determining the quality of colleges and universities. However, as the saying goes, the devil is in the details. This pre-defined vision of an ideal institution does not always take significant contextual differences into consideration. It tends to impose a one-sided vision of an institution, mostly a traditional, research-oriented and highly selective university, which is not necessarily the most responsive to the varied needs of the communities where these institutions are located.
Most well-known rankings tend to equate institutional quality with research productivity, measured either by the number and impact of publications in peer-reviewed journals or by the selectivity of admission processes. Such a proxy for quality downgrades institutions that place greater emphasis on teaching and perpetuates the "publish or perish" principle. In the pursuit of a better position in the rankings, internal and external funding will most likely favor academic programs or research units that are more inclined to engage in the dynamics of researching and publishing. Finally, it diminishes the role of other important functions of tertiary education institutions, such as teaching and public service.
Another dimension of rankings attempts to measure "reputation" by gathering opinions (which are unfortunately not always competent and objective) from employers, field experts and/or alumni. Quite expectedly, people tend to favor certain institutions regardless of the quality of their academic programs simply because of the fame or recognition that precedes them. As a result, other institutions and programs that may not have a famous name, but are making meaningful contributions to society by producing the graduates required by their local and regional economies, fall by the wayside.
This also means that an institution that is not highly selective and tends to serve students from lower socio-economic and academic backgrounds is likely to be left out of the rankings, even though the "value added" it provides to its students may be proportionally higher than that of a highly selective institution that has already had the chance to attract better-off students.
Similarly, the appropriateness of measuring the reputation of a tertiary education institution by its alumni’s job profile is not exempt from criticism. Jenny Martin, a biology professor at the University of Queensland in Australia, puts it very well: “International rankings are meant to identify the best workplaces, yet none of the rankings evaluate important indicators like job satisfaction, work-life balance, and equal opportunity”.
An alternative approach being explored by a number of tertiary education systems encourages institutions to "benchmark" against peers in a less disruptive and more proactive way than rankings do. The benchmarking approach allows for a meaningful comparison of institutions based on their own needs. It includes some elements already incorporated in rankings, but allows institutions to customize comparisons of their performance vis-à-vis the best, average, or lowest performing institutions of their type. This approach makes it possible for institutions to define their own niche and reduces the pressure to blindly follow a unilateral definition of a "good institution."
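As a rough illustration of the benchmarking idea (the peer group, indicators, and values below are invented for the example), an institution can position each of its own indicators against the highest, average, and lowest values of a peer group it has chosen itself, rather than against a single league table:

```python
# Illustrative only: invented peer institutions and indicator values.
peers = {
    "Peer 1": {"graduation_rate": 62, "students_per_staff": 22},
    "Peer 2": {"graduation_rate": 71, "students_per_staff": 18},
    "Peer 3": {"graduation_rate": 58, "students_per_staff": 25},
}
own = {"graduation_rate": 66, "students_per_staff": 20}

# For each indicator, compare the institution's own value
# with the highest, average, and lowest values among its self-selected peers.
for indicator, value in own.items():
    values = [p[indicator] for p in peers.values()]
    highest, average, lowest = max(values), sum(values) / len(values), min(values)
    print(f"{indicator}: own={value}, highest={highest}, "
          f"average={average:.1f}, lowest={lowest}")
```

The point of the sketch is that the institution chooses the peers and the indicators, so the comparison serves its own improvement agenda rather than a ranker's.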
A good example is the University Governance Screening Card Project, which brings together more than 100 tertiary education institutions from seven countries in the Middle East and North Africa (MENA) region. Sponsored by the World Bank and the Centre for Mediterranean Integration, this initiative aims to enhance institutional governance and accountability through capacity-building measures grounded in an evidence-based and inclusive approach.
Participating institutions can benchmark against peers on matters related to governance, quality and management. A number of them have developed detailed action plans and related capacity-building measures in order to improve their performance. Similar initiatives are being established in other countries as well, including benchmarking projects in Africa and India.
It would be naive to assume that rankings will lose their importance in the future. However, while recognizing that they are here to stay, we must be aware of their many limitations, their intended and unintended biases, and their convenience-driven use by institutions and even national governments.
This post originally appeared in Higher Education in Russia and Beyond.
Thank you for this critical and insightful essay. I hope the University Governance Screening Card project can be rolled out worldwide. Becoming a good university starts with fixing the governance and having accountable management. This requires external assessors to have a close and thorough look at a university. For many universities in developing countries both steps - good governance, and external assessment - are still a real challenge.
Dear Albert:
Based on the successful implementation experience in the seven countries of the Middle East and North Africa (MENA) region, we are now planning to make it available in other countries. So far, institutions from different parts of the world have expressed interest in participating. I fully agree that sound governance is a fundamental ingredient of effective institutions and tertiary education systems.
Social engagement methodology - Francisco, the issues you raise are highly pertinent to HE institutions in emerging economies, which often lack any power to influence the ranking criteria. Furthermore, there needs to be more work done on an acceptable methodology to include an institution's social impact and social engagement.
Dear Peter:
Thanks for your comment. During a dialogue held at a recent conference in Shanghai, representatives from ranking companies were asked about the need to include the social impact of institutions as part of the criteria they use to weigh relevance. Their general response was that it would be good, but they argued that such impact is difficult to measure. Not too much hope, I guess.
This confirms the need to explore alternative ways.
Congrats on a very insightful and critical view on rankings. Ranking does serve its purpose in making comparisons between some universities, but in many instances the ruler, or indicators, used are questionable. The other downside, as rightly mentioned in the article, is the diminished importance accorded to other domains in the education system.
Thanks for your kind comments.
I completely agree with Francisco Marmolejo.
Rankings that are not based on the "value added" principle penalize the less mobile and more talented students coming from poor educational backgrounds. They tend to polarize the educational system. This happens even if they are not used to allocate public funds due to their influence on families' choices.
Francesco Ferrante, 2014. "Assessing quality in Higher Education: some caveats," MPRA Paper 62450, University Library of Munich, Germany.
Thanks, Francesco, for sharing information about your paper. I had the opportunity to read it online; it provides interesting evidence. As you indicate in your paper:
"Failing to take account of the incoming quality of students may give rise to significant distortions in the evaluation of the academic productivity of universities". Such a deficiency in properly assessing the quality and readiness of incoming students, and the corresponding measurement during and after completion of studies, is highly important. Some countries have developed interesting approaches, such as that of Colombia.
Francisco the points you raise are especially pertinent to HE institutions in emerging economies who often have little power to influence the ranking criteria. Secondly, the aspect of social engagement or community impact of HE is difficult to measure and there therefore needs to be more emphasis on developing an acceptable methodology to address this limitation.
Dear Peter:
Thanks for your comment.
I agree that measuring social engagement is difficult. Some encouraging methodological approaches have been developed which can serve as a basis for further development and consensus.
Regards,
Francisco Marmolejo
Well said, Francisco, I love it. Yes, one important factor that is usually overlooked in these rankings, or doesn't get the weight it should, is public service. Another factor, which I think is especially important in developing countries (well, actually almost everywhere), is how much the private sector invests in these universities, mainly in R&D, scholarships, project sponsorships, etc.
This is such an important topic - thank you Francisco!
We have been trying to move away from rankings in the Canadian post-secondary landscape, and we see this in the new Maclean's College Guide - a guide which features no rankings!
http://www.macleans.ca/education/college/why-colleges-are-increasingly-…
It is encouraging to see the work done by "Colleges and Institutes Canada" (http://accc.ca/). Congratulations.
Francisco Marmolejo
It would be interesting to know how the ranking organisations generate revenue. Advertisements by universities may be playing a role, especially those whose ranking has improved. Let the details be told. Further, which countries bother too much about these global rankings? I guess it is more of the developing world. Some governments may unfortunately be taking rankings into account while setting research agendas and funding, which may not solve much of their own problems.
Surprisingly, the interest exists in both developing and developed countries. In both cases, some policy decisions made by governments tend to be influenced by the positioning of their institutions in the rankings.
Francisco Marmolejo
The article is based on facts. Each action has an opposite reaction. Yes, we are getting obsessed with ranking. I think the World Bank has done a good job of financing the enhancement of education at the tertiary level in Bangladesh. The self-assessment program is very important to improve the quality of education in Bangladesh. After finishing the project, I think internal quality enhancement at different higher educational institutes will be strengthened. However, external assessment is also required. As such, an accreditation council may play a positive role if it works independently, transparently and fairly as an external assessor. At the same time, benchmarking against international standards is very important. If we do a SWOT analysis and go for a confrontation matrix, we shall see that, despite some shortcomings, if ranking is impartially done it may not satisfy the Pareto optimum but will satisfy the theory of the second best. There is no alternative to quality higher education.
Although ranking has some shortcomings, particularly in terms of relevance to the culture and developmental levels of post-secondary institutions, it remains quite useful as a starting point for the development of HEIs in developing countries where education is not placed on the front burner of the economic development process.
Francisco: As always, you offer thoughtful analysis of an important dimension of & challenge to the higher education community. One of the vexing challenges in improving on the existing ranking metrics is how to measure the societal, not just scholarly, impact of higher education institutions. The scholars at Shanghai Jiao Tong University who introduced the Academic Ranking of World Universities significantly influenced how we think about the scholarly impact of research universities. We have long needed another group of scholars, perhaps at a more applied institution, to develop some reliable & readily obtainable measures of societal impact (e.g., improvements relative to the Sustainable Development Goals). Public institutions in particular can benefit from demonstrating & being acknowledged for their positive impact on the public good.
Dear Francisco
Purpose of a university and its governance
It is well established that universities are founded with the main purpose of advancing and generating knowledge. Gaining knowledge improves an individual's economic and social status, while at the national level, talents with a multitude of skills help in nation building. The contribution of universities goes beyond communities, industries and the political arena, especially in this era of globalisation. With globalisation, university graduates are global citizens and are expected to contribute to global interests.
In most instances, the role of the central agency is to provide funding for universities to achieve their objectives. In this regard, funding percentages vary; some public universities may receive up to 90% of their annual budget from the central agency while others receive less than 25%. Universities in most nations are autonomous, and governance is left to the respective governing councils/boards. However, central agencies and stakeholders (e.g. the taxpayers) would normally expect returns on investment from the universities, and the indicators could take many forms. Some of these indicators, such as community service and development, are not absolutely quantifiable and may not be part of the criteria of ranking systems.
To be ranked or not to be ranked?
With more than 30,000 universities in the world and increasing access to and equality in higher education, universities compete for quality students, excellent faculty and funding for both teaching and research. This competition is often portrayed and 'determined' through numerous ranking and rating systems. The decision to enter a ranking or rating system is fundamentally a conscious decision made by universities, as they should know whether they are ready to be ranked and whether this is important to their core business and aims.
Some universities may decide to focus on teaching and learning, and it may not be in their best interest to be ranked (as research, publications, and commercialisation tend to feature more prominently than the teaching experience in most ranking and rating systems). On the other hand, some universities may decide to be ranked because they are confident that they have made significant contributions to their communities, industries and nation building, and in the fields of research, publications and commercialisation. In some countries, central agencies administer their own rating system instead of a ranking to allow them to gauge the output of their local universities. There are even ranking systems, such as Universitas 21, that rank national higher education systems as a whole (as opposed to individual universities, as is commonly done) and measure central agency input, collective university output and how well the system facilitates knowledge transfer with industry.
How important are rankings?
In essence, to be ranked is important but it is not the be-all and end-all. It is accepted that rankings are valuable as a way to benchmark against other institutions, nationally and internationally, as well as to enable stakeholders to make informed choices relating to educational pursuits, research, collaborations and even funding. The pursuit of rankings, however, must be done keeping in mind that (i) ranking criteria are not exhaustive (and may miss softer indicators such as local community well-being), and (ii) it must be balanced with the purpose of a university's establishment, including producing graduates who excel not just academically but also spiritually, with a sense of national identity (priorities which vary from country to country).
This would also mean that rankings are not for all the universities in the world, as ranking criteria vary and might not be in line with a university's vision and mission. However, universities cannot ignore rankings entirely. Good ranking achievements are the outcome of good governance, research, teaching, relationships with industry and many more contributing factors. Having a clear vision, mission and strategies is important at both the university and central agency levels to ensure that universities remain relevant and competitive while staying true to their purpose of establishment.
Thanks for your thoughtful analysis.
Regards,
Francisco Marmolejo
A nice and sensible write up on an important topic.
I will make a few points:
1. The obsession with rankings needs to be moderated for all rankings, and a good place to begin would be to look within. For an institution which owns and publishes the DB rankings every year, we need to acknowledge that certain countries go all out to improve their rankings without much thought to the real change achieved. We need to be careful of such efforts where the rankings 'become the end'.
2. Research quality at any institution or company is fundamentally suspect. Until we start identifying the funding source for each and every research effort, and identify possible conflicts of interest, no ranking which relies on research will be truly objective.
3. Doctoring grades: It has been documented that Ivy League universities in the US give better grades to their students than other universities do. This behaviour should be penalized rather than rewarded through higher rankings.
Finally, why are rankings important for universities? Most universities are regional in character, except universities in city-states such as Singapore or the UAE. Word of mouth from the hinterland works much better for universities than global ranking lists, where the ranking team may know nothing about the region and its universities.
A critical and yet insightful essay. Rankings are not for all the universities in the world, as ranking criteria vary and might not be in line with a university's vision and mission. However, universities cannot ignore rankings entirely. Good ranking achievements are the outcome of good governance, research, teaching, relationships with industry and many more contributing factors.
http://education.uonbi.ac.ke