‘There is certainly enthusiasm to explore new ways of assessing research’



The need for new ways to evaluate research is pressing, but changing long-standing, deeply ingrained evaluation practices is difficult. Many academic researchers share a feeling of uncertainty about finding alternatives to the long-established methods of assessing academic research.

The urgency for change is clear. One should realise that the way we evaluate and reward research has a strong effect on the research agenda. The current system does not incentivise or reward research on problems of great societal need, nor does such research appear in ‘top journals’ with a high impact factor. “In medical research, for example,” explains Frank Miedema, vice-rector for Research at Utrecht University, “research on the more basic aspects of the origination and development of a stroke gets published in the top journals, which is good for the CV. But research on rehabilitation – how patients can be helped to regain the ability to speak and improve their quality of life after suffering a stroke – gets into ‘less prestigious’ journals, despite the huge societal impact it brings. That’s very problematic for science and society.” “Our mission,” adds Sarah de Rijcke, Professor of Science, Technology and Innovation Studies at Leiden University, “is therefore to broaden the space in which we can do research. We can work on that through our evaluation procedures and the norms and values displayed in them.”

A broader scope

But how do you evaluate research and set criteria for academic staff, if not through the dominant practice of bibliometric indicators? “At the University Medical Center Utrecht,” says strategic policy advisor Rinze Benedictus, “we recently decided to broaden the scope of our mandatory six-year self-evaluation, including a broad set of factors as the basis for our evaluation. In response, the assessment committee indicated they were very uncomfortable – uncomfortable! – assessing our research. They’d rather just have quantitative indicators.” Luckily, the balance is shifting rapidly, and for all the feelings of uncertainty, there is an equal amount of enthusiasm to explore new ways of assessing research.

What’s next

  • To everyone who feels uncertainty and unease about how to evaluate academic research: we all share this feeling.
  • Take care not to move into a bureaucratic, administrative mode without first discussing the values we think are important.
  • When hiring for academic positions, stop thinking in terms of individuals and start thinking in terms of teams.
  • To avert the risk of subjective assessment of candidates for academic posts, clearly articulate the mission of the work and the criteria candidates will be held accountable to in a couple of years’ time.
  • Involve HR in evaluation processes to relieve pressure on evaluators who fall back on metrics due to time constraints.
  • Bibliometric indicators are still relevant when assessing research, as long as they are contextualised. Don’t apply the same standard to every setting, level or discipline.
  • Assessment committees should receive training. For example: Diversity problems aren’t solved by simply adding a woman to a committee; the entire committee needs to be aware of gender bias.

Recognition & Rewards
