Electronic International Standard Serial Number (EISSN)
1588-2861
Abstract
This paper studies the evaluation of research units that publish their output in several scientific fields. A possible solution relies on the prior normalization of the raw citations received by publications in all fields; in a second step, a citation indicator is applied to the units' field-normalized citation distributions. We also study an alternative solution that begins by applying a size- and scale-independent citation impact indicator to the units' raw citation distributions in all fields; in a second step, the citation impact of any research unit is calculated as the average, weighted by publication output, of the citation impact that the unit achieves in each field. The two alternatives are compared using the 500 universities in the 2013 edition of the CWTS Leiden Ranking, whose research output is evaluated according to two citation impact indicators with very different properties. We use a large Web of Science dataset consisting of 3.6 million articles published in the 2005-2008 period and a classification system distinguishing between 5119 clusters. The two main findings are as follows. Firstly, differences in production and citation practices between the 3332 clusters with more than 250 publications account for 22.5% of the overall citation inequality. After the standard field-normalization procedure, in which cluster mean citations are used as normalization factors, this quantity is reduced to 4.3%. Secondly, the differences between the university rankings produced by the two solutions to the all-sciences aggregation problem are small for both citation impact indicators.
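A minimal sketch of the two aggregation alternatives described above may help fix ideas; the notation here is introduced purely for illustration and is not taken from the paper. Let $c_{ij}$ denote the raw citations of publication $i$ assigned to cluster (field) $j$, $\mu_j$ the mean citation rate of cluster $j$, $I(\cdot)$ a citation impact indicator, $n_{uf}$ the number of publications of unit $u$ in field $f$, and $n_u = \sum_f n_{uf}$ its total output.

% Alternative 1: normalize first, then apply the indicator to the unit's
% pooled, field-normalized citation distribution (cluster means as
% normalization factors).
\[
  \hat{c}_{ij} = \frac{c_{ij}}{\mu_j}, \qquad
  I_u^{(1)} = I\bigl(\{\hat{c}_{ij} : i \in u\}\bigr).
\]
% Alternative 2: apply the (size- and scale-independent) indicator within
% each field first, then take the publication-weighted average over fields.
\[
  I_u^{(2)} = \sum_{f} \frac{n_{uf}}{n_u}\, I\bigl(\{c_{if} : i \in u\}\bigr).
\]

Under this reading, the paper's comparison amounts to asking how much the university rankings induced by $I_u^{(1)}$ and $I_u^{(2)}$ differ for the two indicators considered.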