Field-normalized citation impact indicators using algorithmically constructed classification systems of science

authors

  • Ruiz-Castillo Ucelay, Javier
  • Waltman, Ludo

publication date

  • January 2015

start page

  • 102

end page

  • 117

issue

  • 1

volume

  • 9

International Standard Serial Number (ISSN)

  • 1751-1577

Electronic International Standard Serial Number (EISSN)

  • 1875-5879

abstract

  • We study the problem of normalizing citation impact indicators for differences in citation practices across scientific fields. Normalization of citation impact indicators is usually done based on a field classification system; in practice, the Web of Science journal subject categories are often used for this purpose. However, many of these subject categories have quite a broad scope and are not sufficiently homogeneous in terms of citation practices. As an alternative, we propose to work with algorithmically constructed classification systems, which we construct by performing a large-scale clustering of publications based on their citation relations. In our analysis, 12 classification systems are constructed, each at a different granularity level. The number of fields in these systems ranges from 390 at granularity level 1 to 73,205 at level 12, which contrasts with the 236 subject categories in the WoS classification system. Based on an investigation of some key characteristics of the 12 classification systems, we argue that working with a few thousand fields may be an optimal choice. We then study the effect of the choice of a classification system on the citation impact of the 500 universities included in the 2013 edition of the CWTS Leiden Ranking, considering both the MNCS and the PPtop 10% indicator. For all the universities taken together, citation impact indicators generally turn out to be relatively insensitive to the choice of a classification system. Nevertheless, for individual universities, we sometimes observe substantial differences between indicators normalized based on the journal subject categories and indicators normalized based on an appropriately chosen algorithmically constructed classification system. © 2014 Elsevier Ltd. All rights reserved.
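
  • A brief sketch of the two indicators named in the abstract, using their standard definitions (the notation below is ours and is not part of the record): for a set of n publications with citation counts c_i, the mean normalized citation score is

        \mathrm{MNCS} = \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_i},

    where e_i denotes the average number of citations of all publications in the same field as publication i, with the field taken from whichever classification system is used for normalization. The PPtop 10% indicator is the proportion of the n publications that belong to the top 10% most frequently cited publications of their field. Both indicators therefore depend directly on how fields are delineated, which is the sensitivity the paper examines.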

keywords

  • characteristic scores; research performance; scales; distributions; universality; networks; skewness