Improving varsity ranking systems

EducationWorld, July 2016

Brig (Dr.) R.S. Grewal is vice chancellor of Chitkara University, Himachal Pradesh

In April, the Union human resource development (HRD) ministry published its inaugural league table ranking India’s Top 100 universities under its uniquely indigenous National Institutional Ranking Framework (NIRF). Coincidentally, a few weeks earlier on March 2, presidents and vice chancellors of universities from around the world had convened at Melbourne University for the 2016 conference of the Asia Pacific Association for International Education to debate the theme ‘Are Rankings Destroying Universities?’ Thus, even as the world debates the utility of the World University Rankings popularised by the London-based Quacquarelli Symonds (QS) and Times Higher Education (THE), the HRD ministry has introduced the NIRF system for rating and ranking India’s higher education institutions.

The arguments advanced for the NIRF league tables are that they provide benchmarks in an environment where higher education institutions compete for students, faculty, finance and researchers, and that they build accountability and quality assurance into the higher education system.

Emerging as a subnational phenomenon in the US in the early 1900s, university rankings developed into a national practice in 1959, and were transformed into the annual World University Rankings (WUR) by QS and THE in the new millennium. Today, there are ten global college/university ranking systems and more than 150 national and specialist ranking systems worldwide.

The rationale for conducting institutional ranking surveys is that students, schools, industry, employers and funding agencies need them. Institutional league tables provide purportedly valuable markers of academic reputation, research quality, and overall public perception, which in turn prompt internationalisation of higher education institutions.

However, as became apparent at the Asia Pacific conference, critics maintain that current ranking systems measure widely diverse institutions with a common yardstick while ignoring relevant criteria such as institutional vision and mission.

Another criticism is that academic quality is a complex attribute which rankings measure through simplistic methodologies. Moreover, rankings tend to encourage practices that sacrifice teaching-learning on the altar of parameters accorded greater weightage, e.g., research publications. Value addition in teaching-learning, the impact of research on teaching, and the benefits of research and extension activities aren’t accorded weightage in the metrics developed to rate education institutions. Consequently, the real contributions that education institutions make to socio-economic development remain unmeasured.

All league tables ranking higher education institutions measure quality through proxies. For example, students’ entrance test scores are taken as a measure of academic quality, although they reflect the quality of primary-secondary education rather than the higher education experience. Similarly, faculty-student ratios reflect resource mobilisation efficiency rather than teaching quality, and also depend on the type of institution, i.e., public or private. Further, measuring resources (such as the size of library collections) or institutional expenditure on various activities is not a true measure of education quality, because newly established institutions, especially in developing countries, suffer a historical disadvantage.

Another obsession that has influenced ranking systems is measuring research productivity through the number of publications and citations in refereed journals. But this parameter merely counts peer-reviewed papers while ignoring the social and economic impact of research.

Nevertheless, there’s no denying the need for transparent institutional ranking systems which measure accountability and enable comparability. The continuous expansion of Indian higher education, student mobility, globalisation, public accountability, and the entry of a spate of private edupreneurs into higher education necessitate the development of tools and systems which enable comparison of institutions based on the requirements of stakeholders.

Therefore, systems that enable multi-dimensional rankings which measure value to the community need to be developed. The main objective should be to measure what is of value rather than to value what is measured. The objective of the NIRF rankings should be to improve the capabilities and quality of higher education institutions rather than to highlight the scores of elite institutions. Parameters which measure the contribution of institutions to society and the academy through application-oriented research, engagement with industry, and resource allocation and utilisation, are required.

Meaningful new parameters which evaluate the impact of higher education institutions upon their host and wider societies, and measure institutional achievements against articulated missions and goals, need to be conceptualised, because standardised rating and ranking criteria serve negligible social purpose. There can’t be a common ranking system that caters to all requirements. Varied institutional ranking systems would serve the needs of society better.
