The positive side of discursive disagreements in the Journal of Informetrics
(in press)
In his comments on the debate between Butler (2003) and van den Besselaar, Heyman, & Sandström (2017), Martin (2017, at p. 937) describes an earlier debate between his group and me in the 1980s about “the decline of British science” as an example of the kind of disagreement that, in his opinion, is inherent to the social sciences. A working party is said to have concluded “that Leydesdorff’s use of ‘whole counting’ failed to take account of the fact that, with this particular indicator, virtually all countries’ shares were increasing (because of the growing international collaboration) […].” At the time, I was not informed about this party or its report. My argument, however, was that under fractional counting, “a simple increase in international co-authorships could ceteris paribus cause a decline in national performance” (Leydesdorff, 1988, pp. 150f.). Internationalization had thus led to what appeared to be a decline of British science.
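The counting distinction at issue can be made concrete with a small sketch. In whole counting, each country on a paper receives one full credit; in fractional counting, the paper's single credit is divided among its countries. The data below are invented purely for illustration: a country's output is held constant while its share of internationally co-authored papers rises, so its fractional count falls even though nothing else changes.

```python
from collections import Counter

def whole_counts(papers):
    """Credit each country one full point per paper it appears on."""
    counts = Counter()
    for countries in papers:
        for country in set(countries):
            counts[country] += 1.0
    return counts

def fractional_counts(papers):
    """Split each paper's single point equally among its countries."""
    counts = Counter()
    for countries in papers:
        share = 1.0 / len(set(countries))
        for country in set(countries):
            counts[country] += share
    return counts

# Hypothetical data: the UK appears on four papers in each period,
# but in the later period more of them are international.
early = [["UK"], ["UK"], ["UK"], ["UK", "US"]]
late  = [["UK"], ["UK", "US"], ["UK", "FR"], ["UK", "US", "FR"]]

print(whole_counts(early)["UK"], whole_counts(late)["UK"])            # 4.0 4.0
print(fractional_counts(early)["UK"], fractional_counts(late)["UK"])  # 3.5 ≈ 2.33
```

The whole count stays at 4.0 in both periods (and rises for the collaborating countries, echoing the working party's observation that all shares grow with collaboration), while the fractional count drops from 3.5 to about 2.33: internationalization alone registers as an apparent decline.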
This effect was, moreover, reinforced by my opponents’ use of a fixed (1973) journal set (Narin, 1976). The innovativeness of British science—in terms of both internationalization and the exploration of new developments—was not sufficiently appreciated using these methods. I advocated the use of the online Science Citation Index, which includes new journals, albeit with a delay. In this dynamic dataset, the UK was not losing ground during the period under discussion (Braun, Glänzel, & Schubert, 1991; Kealey, 1991; Leydesdorff, 1991, p. 365; cf. Martin, 1991). In my opinion, “the decline of British science” was a scientometric artifact based on two erroneous assumptions: (i) using fractional counting, internationalization was counted negatively, and (ii) using a fixed journal set, new developments were not sufficiently appreciated. At that time, however, the decline argument could be used in a science-policy context (e.g., Irvine & Martin, 1986).
September 2017, University of Amsterdam
Braun, T., Glänzel, W., & Schubert, A. (1991). The Bibliometric Assessment of UK Scientific Performance: Some Comments on Martin’s Reply. Scientometrics, 20, 359-362.
Butler, L. (2003). Explaining Australia’s increased share of ISI publications: The effects of a funding formula based on publication counts. Research Policy, 32, 143-155.
Irvine, J., & Martin, B. R. (1986). Is Britain spending enough on science? Nature 323, 591-594.
Kealey, T. (1991). Government-funded academic science is a consumer good, not a producer good: A comparative reassessment of Britain’s scientific and technological achievements since 1794 and a comment on the bibliometry of B. Martin and J. Irvine. Scientometrics, 20(2), 369-394.
Leydesdorff, L. (1988). Problems with the ‘measurement’ of national scientific performance. Science and Public Policy, 15(3), 149-152.
Leydesdorff, L. (1991). On the ‘Scientometric Decline’ of British Science: One Additional Graph in Reply to Ben Martin. Scientometrics, 20, 363-368.
Martin, B. R. (1991). The Bibliometric Assessment of UK Scientific Performance: A Reply to Braun, Glänzel and Schubert. Scientometrics, 20, 333-357.
Martin, B. R. (2017). When Social Scientists Disagree: Comments on the Butler-van den Besselaar Debate. Journal of Informetrics, 11(3), 937-940; doi.org/10.1016/j.joi.2017.05.021
Narin, F. (1976). Evaluative Bibliometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity. Washington, DC: National Science Foundation.
van den Besselaar, P., Heyman, U., & Sandström, U. (2017). Do observations have any role in science policy studies? A reply. Journal of Informetrics (early view); doi.org/10.1016/j.joi.2017.05.022