The Role & Rule of Rankings
This essay explores the impact of global university rankings on higher education, with a focus on their historical evolution, limitations, and flaws. I examine the detrimental consequences of manipulating ranking systems, along with the resulting financial repercussions, which diminish trust in higher-education institutions. I call for a comprehensive evaluation, urging stakeholders, especially governments, to recognize the subjectivity and limitations inherent in the rankings that inform policymaking decisions related to higher education. I propose strategies for improvement, such as broadening the criteria used for rankings and creating specialized rankings that highlight the unique strengths of various types of institutions in areas like public engagement, student satisfaction, diversity, and sustainability. Collaboration between universities and ranking organizations could enhance ranking accuracy, even as we acknowledge the significance of ranking systems in shaping higher-education decisions and policies.
Individuals and organizations use university rankings for various purposes. Prospective students and their parents often use them to determine which university to attend, while higher-education institutions use them as a benchmarking tool to evaluate their relative performance in comparison to other colleges and universities. Employers may use university rankings to identify top institutions or departments for recruitment purposes.1 For media outlets, university rankings generate interest and increase readership. Government officials use university rankings to inform policymaking decisions related to higher education. Finally, there are those who watch them as a spectator sport.
These rankings, however, have several fundamental flaws and limitations that make them an unreliable and subjective tool for evaluating universities. This is a consequence of their methodologies, which emphasize narrow, quantifiable metrics, such as research output and reputation surveys, while often disregarding other criteria like teaching quality.2 Rankings often fail to accurately reflect the quality and diversity of a university’s programs, faculty, and students.
As a result, rankings can perpetuate an unequal distribution of resources and opportunities, as large, prestigious institutions with greater resources often perform better in the rankings than newer or underfunded ones. At the same time, rankings might also inflate the perceived quality of a university’s programs.3 When the rankings are used to allocate funds or create programs, or to cut existing programs and defund certain disciplines, significant issues emerge.
On the one hand, policymakers may find rankings useful to identify areas of strength and weakness in a country’s higher-education system, which can inform resource allocation and policy decisions to improve competitiveness. On the other hand, rankings may be problematic if they are given too much weight or are not based on comprehensive and diverse criteria. Emphasizing research output or reputation may overlook important objectives like access and affordability, leading universities to prioritize the former metrics over the latter.
To enhance the quality and reputation of their institutions and programs, countries often review and modify their higher-education policies, and international rankings play a significant role in shaping these policies worldwide. Japan, for example, has implemented various initiatives, including its 2014 “Top Global University Project,” to support the development of world-class research universities and increase the international competitiveness of Japanese universities. To improve its performance in international university rankings, France has introduced a performance-based funding system and a national strategy for research and higher education, including its Investments for the Future program. Germany’s Excellence Initiative and Russia’s Project 5–100 likewise promote the development of world-class research and higher-education institutions in their countries.
China is also among these ambitious countries. For a global superpower and ever-rising rival to the United States in many areas, the ascent of Chinese universities to global prominence was no simple task, yet it was achieved in spectacular fashion. When it comes to rankings, the rise of Chinese universities is, relatively speaking, a recent trend. It reflects the rapid development and growth of the Chinese higher-education system and the enterprising aims of the Chinese government, which, over the past decade, has made significant investments in research and development. China has also increased its efforts in international collaboration and the recruitment of top faculty, which has led to a significant increase in the number of publications and patents. Naturally, these rapid improvements have translated into an impressive (and perhaps unprecedented) performance in international university rankings. China is now among the top-performing countries, with institutions placed in the top 100 by every major ranking table (and five universities in the top 50 of the QS World University Rankings, one of the most well-known global ranking lists). Their positions demonstrate the country’s growing influence and competitiveness in the global higher-education landscape.
Conversely, in recent years, North American and European universities have seen their dominance of global higher-education rankings decline, owing to factors such as the rise of new economic powers in Asia and increased investment in higher education by governments in other regions of the world. While U.S. and European universities may have been slow to respond to this competition, there is no evidence to suggest they did not take the challenge seriously.
The phenomenon of university rankings is not without controversy. Some scholars argue that university rankings are oversimplified, that they attempt to measure values that cannot be quantified, and that they do not (and cannot) accurately reflect the quality of an institution.4 Overprioritizing rankings can also create pressure to focus on the factors the rankings themselves emphasize rather than on those that measure the quality of education provided to students. There are also examples of universities manipulating the rankings to advance their positions. Different countries use international university rankings to inform their higher-education policies and set goals for improvement. It is crucial for government officials (and related organizations, institutions, and departments) to consider the limitations and subjectivity of rankings when using them to guide policymaking.
A clear example of the limitations of using university rankings to inform policy is evident in the context of immigration. Some governments use university rankings to determine eligibility for visas and residence permits. For example, the Dutch government only recognizes schools listed in three major international ranking tables for its “highly skilled migrant visa,” and the United Kingdom offers a visa to graduates of universities ranked within the top 50 positions on two or more international ranking lists. These rankings, however, are updated annually, making it uncertain whether a university will remain eligible in subsequent years. For instance, alumni of the University of Wisconsin–Madison were eligible for a visa in 2020 but not in 2022. This shows how rankings can be useful but unreliable, and why other sources of information should supplement their use in decision-making. Understanding the history and development of university rankings can provide insight into their current significance and future trajectory.
The first example of university rankings can be traced back to psychologist James McKeen Cattell (1860–1944), a professor at Columbia University. In 1910, Cattell published a list of institutions based on the number of eminent “men of science” (in the German sense, Wissenschaftler), a term that referred to faculty who actually conducted research. He measured only the quantity of faculty, not the quality of research. His list included the following institutions: 1) Harvard, 2) Chicago, 3) Columbia, 4) Yale, 5) Cornell, 6) Johns Hopkins, 7) Wisconsin, 8) U.S. Geological Survey, 9) Department of Agriculture, 10) MIT, 11) Michigan, 12) California, 13) Carnegie Institute, 14) Princeton, 15) Stanford, 16) Smithsonian, 17) Illinois, 18) Pennsylvania, 19) Bureau of Standards, and 20) Missouri.5
Harvard had the largest faculty at the time, so it was ranked as the top institution, since Cattell did not consider the quality or distinction of the faculty. Had such factors been taken into account, other universities might have been ranked differently. Clark University, for example, which was known for its highly distinguished faculty, could have been at or near the top of the list. This illustrates the limitation of Cattell’s rankings: they focused on quantity over quality.
Although Cattell’s list came first, many scholars consider chemist Raymond Mollyneaux Hughes’s 1925 list the first “proper” example of university rankings. Following a more comprehensive methodology than Cattell’s, Hughes based his rankings on peer surveys and measured the reputation of individual departments within universities rather than ranking the universities as a whole.6
The landscape today is quite different. In the past, the peer reputation or the number of distinguished faculty members at an institution was deemed sufficient to rank it. Today, there are thousands of institutions, in hundreds of different systems, catering to millions of students. It is impossible to rank them properly, and certainly not by counting esteemed faculty members or relying on faculty perceptions. The problem, however, is that the public still wants to know which university is “better,” despite the fact that universities serve a diverse body of students with a variety of interests. It is not surprising that institutions regularly update their websites or take to social media to promote their positions in the latest rankings; tellingly, they tend to advertise the lists on which they rank highest.
For prospective students, or perhaps scholars, seeing where a certain institution is ranked might be important. Rankings provide a straightforward list that purports to identify the best institutions across a range of metrics, in a form that is often more digestible than the complex reports written for readers already versed in the subject. This simplifies decision-making for all stakeholders, as the ranking order in any given list is always clearly defined.
The importance of international university rankings lies in their capacity to compare schools across different countries, resulting in a clear and straightforward list of institutions. Ideally, rankings would foster the exchange of best practices, but in reality, they establish a hazardous playing field in which elite institutions are privileged.
There are numerous university rankings published by various organizations around the world, each using its own methodology and criteria. There are, however, three widely recognized major international rankings: the Times Higher Education World University Rankings (THE), the QS World University Rankings (QS), and the Academic Ranking of World Universities (ARWU), otherwise known as the Shanghai Rankings.7 Widely followed internationally, these three rankings have a significant impact on a university’s reputation and on the decisions of students globally. One notable exception may be students in the United States. As a higher-education powerhouse, and the leading country in nearly every international ranking, the United States has its own prominent university ranking list, the U.S. News & World Report Best Colleges Rankings (U.S. News).8
As a result of the different methodologies used by each ranking, there are clear differences in their respective outcomes. While the top 20 institutions are more or less the same in each table, with relatively small variations, the disparities become increasingly pronounced beyond the top 50. For example, in 2022, the University of Minnesota, my alma mater, ranked 44th by ARWU, 86th by THE, and 186th by QS! This drastic discrepancy, from 44th in the world to 186th, illustrates the impact of the specific criteria and methodologies used by each ranking system.
What methodologies do these tables use? At the time of writing, THE evaluates a university based on thirteen performance indicators that measure research productivity, teaching, citations, international outlook, and industry income. It is important to note that THE’s methodology has been significantly updated for its 2024 lists to ensure it accurately represents the outputs of the diverse range of research-intensive universities worldwide, both now and in the future.9 QS determines its world rankings based on six performance indicators: academic reputation (40 percent), citations per faculty (20 percent), faculty-student ratio (20 percent), employer reputation (10 percent), international student ratio (5 percent), and international faculty ratio (5 percent). Much like THE, QS has introduced more transparency for its 2024 rankings, implementing its largest methodological enhancement so far and adding three new metrics: sustainability, employment outcomes, and international research network.10 ARWU evaluates universities based on six performance indicators grouped into four categories: quality of education, quality of faculty, research performance, and per capita performance.
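To make the weighting arithmetic concrete, here is a schematic worked example of the six-indicator QS composite described above. The indicator scores are hypothetical, and QS normalizes each indicator before weighting, so this illustrates the logic of the calculation rather than reproducing QS’s actual procedure:

\[ \text{Composite} = 0.40\,\text{AR} + 0.20\,\text{CPF} + 0.20\,\text{FSR} + 0.10\,\text{ER} + 0.05\,\text{ISR} + 0.05\,\text{IFR} \]

A university scoring 90 on academic reputation (AR), 70 on citations per faculty (CPF), 80 on faculty-student ratio (FSR), 85 on employer reputation (ER), 60 on international student ratio (ISR), and 50 on international faculty ratio (IFR) would receive \(0.40(90) + 0.20(70) + 0.20(80) + 0.10(85) + 0.05(60) + 0.05(50) = 80.0\). Note how heavily the composite leans on the reputation survey: a one-point gain there is worth eight times a one-point gain on either internationalization ratio.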
In the United States, U.S. News evaluates universities based on seventeen key measures across the following categories: graduation and retention rates, social mobility, graduation rate performance, undergraduate academic reputation, faculty resources, student selectivity, financial resources per student, average alumni giving rate, and graduate indebtedness. The weight of each indicator varies, with graduation and retention rates receiving the highest weight at 22 percent and alumni giving rate receiving the lowest weight at 3 percent. It is important to recognize that the categories used in these rankings are self-reported, which means the institutions provide the data that the ranking organization uses to assign their positions on the list. In another significant update, the latest iteration of U.S. News has introduced new metrics encompassing measures of first-generation college student success, postgraduation earnings compared to those of high school graduates, and a heightened emphasis on graduation rates among students receiving federal Pell Grants. It has also eliminated five metrics from its methodology, including class sizes and alumni giving, while preserving others like the peer survey.11
More ranking systems are available to stakeholders, some of which rank institutions as a whole, while others focus on specific areas. For example, the National Taiwan University (NTU) World University Rankings sort universities based on their position in the “Performance Ranking of Scientific Papers for World Universities,” which evaluates productivity, impact, and excellence in research. In 2023, NTU listed the top ten universities as: 1) Harvard, 2) Stanford, 3) University College London, 4) University of Oxford, 5) University of Toronto, 6) Johns Hopkins, 7) University of Washington, Seattle, 8) MIT, 9) University of Cambridge, and 10) University of Michigan, Ann Arbor.
Similarly, University Ranking by Academic Performance (URAP), produced by the Middle East Technical University in Türkiye, ranks universities based on their performance in research and academic productivity. Their top ten universities in 2023 were: 1) Harvard, 2) University of Toronto, 3) University College London, 4) University of Oxford, 5) Tsinghua University, 6) Stanford, 7) Zhejiang University, 8) Université Paris Cité, 9) Shanghai Jiao Tong University, and 10) Johns Hopkins.
The Leiden Rankings in the Netherlands focus on the scientific impact of universities as measured by bibliometric indicators, such as the number of publications, citations, and collaboration networks.12 U-Multirank, produced by the European Commission and several European higher-education associations, allows users to compare universities on a variety of indicators, including teaching, research, and international orientation.13 Universitas Indonesia’s GreenMetric ranking, in operation since 2010, measures the environmental sustainability performance of universities around the world.14 Webometrics, published by the Spanish National Research Council, ranks universities based on their online presence and impact.15 The Washington Monthly College Rankings evaluate colleges in the United States based on their contribution to the public good in three areas: producing research, promoting social mobility, and encouraging public service.16
The SCImago Institutions Rankings (SIR) rate academic and research institutions based on their research performance, innovation outputs, and societal impact.17 SIR groups institutions by country and sector, and its ranking is based on a five-year period. Its list includes indicators such as normalized impact, excellence with leadership, output, scientific leadership, international collaboration, patents, and societal impact. Because SIR also includes companies and government institutions, it is not surprising to see universities interleaved with other organizations (for example, in the 2023 overall rankings, the Chinese Academy of Sciences holds the top spot, with Harvard ranking 4th, Google 5th, Microsoft 20th, and MIT 31st).
Academic Influence provides university rankings on its website using a methodology that distinguishes it from others.18 It uses machine learning to collect and analyze data from publicly available sources like Wikipedia, Crossref, and Semantic Scholar, and it argues that its rankings are objective because they are produced without human intervention once the data are gathered. In 2023, its top ten most influential universities were: 1) Harvard, 2) Columbia, 3) Chicago, 4) University of California, Berkeley, 5) Yale, 6) MIT, 7) Princeton, 8) Stanford, 9) University of Michigan, and 10) Cornell.
It is important to note that there is no centralized website or index that aggregates all global university rankings. In 2015, however, geographer Vladimir Moskovkin and colleagues proposed a methodology that calculates an Aggregated Global University Ranking (AGUR) by using machine-learning and data-mining algorithms to compare and aggregate positions from various global rankings.19 In 2019, the University of New South Wales in Sydney developed the Aggregate Ranking of Top Universities (ARTU), which uses THE, QS, and ARWU to generate an aggregate score.20
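The aggregation idea behind ARTU can be sketched in a few lines of code. The following is a minimal illustration, assuming a simple sum-of-positions approach in which a lower total across the three tables yields a better aggregate standing; the university names and positions are invented, and ARTU’s actual handling of ties, ranking bands, and unranked institutions may well differ:

```python
# Minimal sketch of sum-of-ranks aggregation in the spirit of ARTU.
# Positions are illustrative, not actual data from THE, QS, or ARWU.
positions = {
    "University A": {"THE": 3, "QS": 5, "ARWU": 4},
    "University B": {"THE": 10, "QS": 2, "ARWU": 7},
    "University C": {"THE": 6, "QS": 9, "ARWU": 5},
}

# A lower total of positions across the three tables ranks higher.
aggregate = sorted(positions.items(), key=lambda item: sum(item[1].values()))

for place, (name, ranks) in enumerate(aggregate, start=1):
    print(f"{place}. {name} (total = {sum(ranks.values())}, {ranks})")
```

Even this toy version shows why aggregate lists inherit the biases of their inputs: an institution missing from one of the three source tables has no well-defined total at all.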
There are also websites, like TcPalm, that use data from the National Center for Education Statistics (NCES) and the Department of Education on crimes occurring on college campuses to compile a “college crime ranking.” These rankings track the number of crimes that occur both on and off campuses at colleges, universities, and postgraduate institutions. Users have the option to select a category (such as criminal offenses, violence against women, hate crimes, arrests), choose a specific year between 2014 and 2020, and pick a specific state or the entire country. The platform then generates a ranking of institutions based on the number of reported incidents in the chosen category and timeframe.
While it is possible to scrutinize each ranking criterion from a scholarly perspective and provide a scientific explanation for its accuracy and importance, what matters to many people is the final product: a list in descending order. International university rankings can be a useful tool for comparing universities and identifying trends and patterns in higher education. As a scholar of higher education, however, I emphasize that it is nearly impossible to create a comprehensive and inclusive ranking table that caters to all students from all backgrounds with different personal agendas. An accomplished Chinese student who is eyeing a prestigious U.S. university will probably have different criteria in their decision-making compared to an accomplished American student aiming for the same university. Whether in Finland or Türkiye or the United States or China, it is important for stakeholders, especially students, to consider multiple factors when making decisions about their education.
We should acknowledge that “Rankings are here to stay.”21 Regardless of individual opinions on rankings, their influence on the higher-education sector is undeniable; indeed, international university rankings have played an important role in higher education for decades.
On the one hand, university rankings can help students, researchers, and policymakers to make more informed decisions (such as where to study or collaborate), and enable university leaders to focus on certain areas that are beneficial to students. On the other hand, methodologies and criteria used by ranking systems are not without bias. They are subjective in various ways, which leads to unfair or inaccurate representations of universities. Simply put, the playing field is not level.
Rankings have been creating pressure on universities to prioritize certain metrics over others, potentially leading to a narrow focus on research and internationalization at the expense of other important aspects of higher education, such as teaching and service.22 Perhaps one of the most notable examples of the impact of subjective methodologies used in rankings surfaced early in this decade, when two highly prestigious universities in the United States made headlines with their decision concerning the U.S. News rankings.
In 2022, both Harvard Law School and Yale Law School withdrew from the U.S. News rankings because of concerns about the ranking system’s methodology and incentives. Harvard had previously expressed concerns about the ranking’s impact on socioeconomic diversity and allocation of financial aid based on need, as well as the heavy weighting of test scores and grades.23 Yale Law School had similar concerns. In mid-January 2023, Harvard Medical School announced its decision to withdraw from U.S. News rankings due to concerns that rankings encourage institutions to prioritize boosting rankings over nobler objectives. Other prestigious medical schools in the United States followed this decision, indicating a trend that could spread to more universities and departments.
Rankings not only put pressure on universities to prioritize certain metrics over others, but also create a highly uneven playing field. The decision by Harvard and Yale’s law schools to stop participating in the U.S. News rankings highlights the impact of these subjective methodologies on universities. It is, therefore, important for universities and ranking systems to collaborate and ensure that the ranking process aligns with the best ideals of education and does not compromise the quality of education for students.
The practice of universities attempting to manipulate ranking criteria and providing misleading information to improve their positions, commonly referred to as “gaming the system,” is, unfortunately, widespread. This problem of manipulated ranking data has been observed across the higher-education spectrum, from lesser-known institutions to world-renowned universities, on multiple continents.
When success or failure is defined solely by numerical metrics, the potential for corruption increases. The famous principle in the social sciences known as Campbell’s Law states that “the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”24 In other words, the more that a particular metric or indicator is relied upon to make important decisions, the more likely it is to become distorted and unreliable. There are various reasons why this could happen, such as manipulation and other corrupt practices to achieve a desired outcome, or simply because the metric becomes less useful or relevant over time as conditions change.
For many institutions, placing high in the rankings is one of the most important goals, because an undetermined but possibly large portion of their revenue depends on their performance in the ranking leagues. Rankings have such a significant impact on the reputation and perceived quality of an institution that universities and higher-education systems around the world have become increasingly focused on improving their positions, with some resorting to gaming the system by finding ways to manipulate the ranking criteria in their favor.
This tactic comes with serious consequences, not only for institutions and ranking organizations but also for the larger higher-education community: the broader network of institutions, organizations, professionals, students, policymakers, and other stakeholders. Institutions engaging in such tactics risk losing funding, damaging their reputation, and facing long-term consequences such as a decline in the quality of education offered and difficulty attracting top students and faculty. These actions undermine the integrity of individual institutions and erode confidence in the trustworthiness of higher education as a whole.
A recent scandal at Columbia University raises questions about the trustworthiness of university rankings. A mathematics professor accused the university of submitting false statistics to the U.S. News rankings, resulting in a significant drop in the university’s ranking. Columbia acknowledged the errors and pledged to improve. If a highly prestigious institution like Columbia felt the need to submit false data, what does this say about the trustworthiness of rankings for other, less scrutinized universities?
The answer is straightforward: as long as rankings remain significant, there will always be attempts to manipulate the system. The success of these attempts will vary depending on the type of manipulation. There have been—and, unfortunately, will continue to be—instances in which universities are accused or found guilty of corrupt practices that manipulate their rankings. Some may resort to “buying citations” from highly cited researchers, while others may falsify student selectivity data, or overstate GPA and enrollment data.25 These examples emphasize the need for transparent and reliable ranking methods, as well as regular audits and checks to guarantee the accuracy of data used in these rankings.
Overall, it is important for universities to approach the ranking process with integrity. Universities need to prioritize the ethical reporting of data, and ranking organizations should have more robust ways of verifying those data. Since Campbell’s Law highlights the dangers of overreliance on quantitative indicators in decision-making (and underlines the need for multiple sources of information, as well as a more nuanced approach to evaluation), preventing efforts to game the system is the combined responsibility of ranking organizations and universities.26
It is evident that the current major university rankings favor certain types of institutions over others. Universities lacking certain facilities or departments, especially those without medical schools, face a significant disadvantage in traditional rankings: health-related research is currently the largest field of science globally, accounting for about one-third of all publications, and rankings give considerable weight to publication counts.27
Nevertheless, there is strong evidence that universities focusing on specific areas of study can still achieve success in those areas, even with lower rankings in standard evaluations. For instance, Wageningen University & Research in the Netherlands has been consistently named the world’s most sustainable university by UI GreenMetric since 2017, and University of California, Davis, holds the top spot among U.S. institutions in the same evaluation, ranking fifth in the world. This pattern offers a different starting point for considering rankings from a constructive perspective.
Since rankings are an integral part of the higher-education sector, and because they will in all likelihood maintain their importance for the foreseeable future, efforts to ignore rankings or replace them with alternative evaluation methods will probably not succeed in the short term. While we cannot completely eliminate rankings—nor should we necessarily endeavor to do so, as there are areas in which they have positively impacted higher education—we can work toward improving their diversity and reliability.
Improving university rankings is not an easy task. It requires a combined effort by universities, ranking organizations, and, to some extent, governments. One solution would be to diversify the ranking criteria by including highly important but often disregarded factors such as student experience, service for the public good, campus diversity, and public engagement. Rankings should also aim to represent the experiences of different constituents: students, faculty, staff, and perhaps even community members beyond campus. For greater fairness and precision, rankings should concentrate on particular elements of educational institutions rather than taking a blanket approach and drawing generalized conclusions.
A shift toward more specialized rankings that focus on individual areas instead of the entire institution could level the playing field and allow for a more informed and comprehensive assessment, eliminating certain advantages held by established institutions in the English-speaking world and showcasing unique strengths in areas that have not been previously emphasized. This approach could lead to a more informed and dynamic understanding of higher-education institutions, and help drive improvements in transparency and outcomes.
Furthermore, rankings can (and should!) use the measure proposed by Wendy Fischman and Howard Gardner called Higher Education Capital (HEDCAP), which encompasses the ability to attend, analyze, reflect, connect, and communicate on important issues.28 Although the factors that contribute to the development of HEDCAP may be difficult to demonstrate, an increase in HEDCAP over the course of a student’s studies could be included in rankings as a metric for assessing how effectively colleges and universities instill these essential skills.
University rankings have always been susceptible to disputes, and in recent years the number of controversies and scandals surrounding them has risen sharply. This has led to a growing realization that the existing ranking systems need improvement because they do not produce fair and comprehensive rankings. Moreover, the rankings that attempt novel approaches and unconventional lists are either underdeveloped or have failed to capture the attention of stakeholders outside the rankings community.
International ranking tables have typically focused on measures such as research output and reputation, which has exacerbated the inequality between old, prestigious institutions and the rest. In the short term, the controversies surrounding the rankings and the changing demographics of higher education will most likely push ranking organizations to be more forward-thinking, to include more criteria in their data, and to alter their methodologies to reflect the diversity of institutions across the globe. This will likely offer temporary relief from universities’ objections to rankings, but the law and medical schools’ boycott of U.S. News has opened a Pandora’s box, and the boycott will likely spread to other schools and rankings in the near future.
In the long term, I anticipate that university rankings will be characterized by a greater focus on nontraditional areas such as public engagement, student satisfaction, diversity on campus, and sustainability. Public engagement is particularly critical as it demonstrates the commitment of universities to serving the communities of which they are a part, and the positive impact they can have beyond the traditional areas of teaching and research. I believe it is only a matter of time before this becomes a major section of its own in international ranking tables.
Another important criterion to assess would be democratic values on campus. The Global Public Policy Institute (GPPI) in Germany conducted a study of academic freedom in 2021, published as the “Academic Freedom Index.”29 GPPI does not rank institutions; instead, it lists countries based on the level of freedom at their universities. I believe that incorporating democratic values into rankings could provide valuable insights and add a new dimension to ranking systems. It would be worth exploring how this could be done in a fair and unbiased manner.
A comprehensive ranking system that takes into account not just academic achievements but also the values and practices a university promotes, such as democratic values and open-mindedness, could be quite useful for stakeholders. Measuring democratic values on a campus is challenging, however, as they vary greatly across countries. What is considered a minor comment in the United States might lead to termination in Türkiye, or to an even more serious outcome in China. Hence, finding a universal “common denominator” for democracy on campus that is not biased toward a specific country would be difficult.
In the future, I envision university rankings that are more tailored to specific areas and needs. These rankings will be narrower in scope but provide greater detail within their focus area. This will be beneficial for both students and higher-education institutions, as it will allow institutions to experiment and excel in specific areas, and create a more level playing field in terms of competition. Because the current system of rankings is often criticized for being too broad and not highlighting institutions’ unique strengths in particular areas, a more specialized ranking system that reflects the diversity of institutions and, above all, meets the needs of a diverse body of students would provide a more accurate picture of each institution’s strengths and weaknesses.
University rankings have become a common tool in higher education, used by various stakeholders for a range of purposes. Despite their undeniable popularity, they are often criticized for their reliance on narrow, quantifiable metrics and their inability to capture essential elements of higher education such as service, teaching, and the public-good mission. Despite these criticisms, university rankings continue to play a significant role in decision-making and resource allocation for government officials, and in marketing for university administrators. University rankings may be useful tools for institutions to measure their perceived prestige and reputation; however, they do not always provide students and parents with a complete picture of what a college or university can offer. Factors such as class size and retention rates can be important considerations when selecting a school, but they do not necessarily reflect the quality of education that students will receive, or their overall experience at the institution.
There is a clear need to improve the diversity and reliability of university rankings. This can be accomplished through a concerted effort between universities, ranking organizations, and governments, and by moving toward the creation of specialized rankings that consider a wider range of criteria beyond traditional metrics. Nontraditional metrics, such as public engagement, student satisfaction, diversity, and sustainability might offer a more comprehensive and nuanced understanding of higher-education institutions. In light of these potential improvements, the future of university rankings will likely involve a shift toward increasingly tailored and specialized rankings, offering a more informed and dynamic perspective on the state of higher education.
Endnotes
1. For readers outside the United States, the term departments means faculties.
2. Philip G. Altbach, “Rankings Season Is Here,” in The International Imperative in Higher Education: Global Perspectives on Higher Education, ed. Philip G. Altbach (Berlin: Springer, 2013), 267–277.
3. Ibid.
4. Simon Marginson, “The Global Multiversity,” in The Dream Is Over: The Crisis of Clark Kerr’s California Idea of Higher Education (Oakland: University of California Press, 2016), 71–80; Andrejs Rauhvargers, Global University Rankings and Their Impact: EUA Report on Rankings (Brussels: European University Association, 2011); and P. T. M. Marope, Peter J. Wells, and Ellen Hazelkorn, eds., Rankings and Accountability in Higher Education: Uses and Misuses (Paris: UNESCO, 2013).
5. J. McKeen Cattell, A Statistical Study of American Men of Science, reprinted from Science (1910) (accessed March 12, 2024), 591.
6. Raymond M. Hughes, A Study of the Graduate Schools of America (Oxford, Ohio: Miami University, 1925).
7. The Times Higher Education World University Rankings (accessed March 15, 2024); the QS World University Rankings (accessed March 15, 2024); and the Academic Ranking of World Universities (accessed March 15, 2024).
8. U.S. News & World Report Best Colleges Rankings (accessed March 15, 2024).
9. “World University Rankings 2024: Methodology,” Times Higher Education, September 20, 2023.
10. Craig O’Callaghan, “QS World University Rankings Methodology: Using Rankings to Start Your University Search,” November 27, 2023.
11. Jeremy Bauer-Wolf, “U.S. News Shakes Up Rankings Methodology—But Top Colleges Held Their Spots,” Higher Ed Dive, September 18, 2023.
12. “Information about the CWTS Leiden Ranking,” CWTS [Centre for Science and Technology Studies] Leiden Ranking Open Edition (accessed March 15, 2024).
13. “U-Multirank Project,” U-Multirank (accessed March 15, 2024).
14. “UI GreenMetric World University Rankings: Background of The Ranking,” Universitas Indonesia (accessed March 15, 2024).
15. “About Us,” Webometrics (accessed March 15, 2024).
16. The Editors, “A Note on Methodology: 4-Year Colleges and Universities,” Washington Monthly, August 27, 2023.
17. “Ranking Methodology,” The SCImago Institutions Rankings (accessed March 15, 2024).
18. “About AcademicInfluence.com,” Academic Influence (accessed March 15, 2024).
19. Vladimir Moskovkin, Nikolay Golikov, Andrey Peresypkin, and Olesya Serkina, “Aggregate Ranking of the World’s Leading Universities,” Webology 12 (1) (2015).
20. “Methodology of UNSW’s Aggregate Ranking of Top Universities (ARTU),” UNSW Sydney (accessed March 15, 2024).
21. Ellen Hazelkorn, “Impact of Global Rankings on Higher Education Research and the Production of Knowledge,” UNESCO Forum on Higher Education, Research and Knowledge, Occasional Paper No. 15 (Paris: UNESCO, 2009), 5.
22. Altbach, “Rankings Season Is Here.”
23. John Manning, “Decision to Withdraw from the U.S. News & World Report Process,” Harvard Law Today, November 16, 2022.
24. Donald T. Campbell, “Assessing the Impact of Planned Social Change,” Evaluation and Program Planning 2 (1) (1979): 67–90.
25. For “buying citations,” see Megan Messerly, “Citations for Sale,” The Daily Californian, December 5, 2014; for falsification of student selectivity data, see Paloma Esquivel, “USC Education School Omitted Key Data for U.S. News & World Report Rankings, Report Says,” Los Angeles Times, April 29, 2022; and for overstating GPA and enrollment data, see Colin Evans, “A Year after Rankings Scandal, Fox Dean Pushes for Transparency and Stability,” The Temple News, June 20, 2019.
26. Campbell, “Assessing the Impact of Planned Social Change.”
27. National Science Board, “Publications Output: U.S. Trends and International Comparisons; Publication Output by Field of Science,” National Center for Science and Engineering Statistics (accessed March 15, 2024).
28. Wendy Fischman and Howard Gardner, The Real World of College: What Higher Education Is and What It Can Be (Cambridge, Mass.: The MIT Press, 2022).
29. “Assessing Academic Freedom Worldwide,” Global Public Policy Institute (accessed March 15, 2024).