preprint version                  


               Loet Leydesdorff and Olga Amsterdamska [1]

                    Department of Science Dynamics

                        Nieuwe Achtergracht 166

                           1018 WV Amsterdam

                            The Netherlands



       The use of citations as science indicators has led to recurrent calls for a theory of citation. A number of such theories have been proposed, and citations have been analysed with respect to their 'life cycles', their nature--as either perfunctory or substantive--and as indicators of codification processes in the various sciences. These analyses, however, tend to emphasize the social and the cognitive dimensions of citations while treating each of these categories as homogeneous. In contrast, our analysis tries to differentiate among the various types of cognitive and social functions of citations.

       In a recent study,[2] we showed that citations can play a number of distinct roles in arguments presented in the citing papers. In this study, we link the results of this textual analysis to the results of a questionnaire in which the authors who had cited one of four biochemistry papers originating from a single laboratory were asked to indicate why they cited that paper, how they came to know about it, what role it had played in their research or their argument, whether or not they accepted the knowledge claim of the original paper, whether they knew any of the original authors, etc. As may be expected, the subjective reasons why a paper is cited do not correspond directly to the textual uses of the citation in the citing text.

       We discuss some consequences of our findings for a theory of citation, and for the use of citations in science studies.



       Citation analysis has conquered the world of science policy analysis. Aggregates of citations are commonly used in evaluation studies as indicators of the 'impact' of publications, as one of the measures of the 'quality' of research groups or even of individual researchers. Co-citation maps of scientific specialties[3] and, increasingly, citation patterns among journals are used to describe the development of disciplines and specialties, and to identify emerging areas of scientific inquiry.[4]

       The fact that publication and citation behavior varies among disciplines[5] was noted shortly after the appearance of the first Science Citation Index in 1961.[6] These differences in the citing practices of scientists from different disciplines and specialties,[7] and even systematic differences in citing patterns within specialties,[8] have led some analysts to the conclusion that comparisons of citation counts are meaningful indicators of variation in performance only when one compares groups working within a single field and within similar institutions. The call for comparisons of "like with like,"[9] however, did not eliminate all criticisms of the policy uses of citation studies. Most recently such criticisms have been presented by MacRoberts and MacRoberts, who argued that citing practices are incomplete and biased, and thus should not be used for testing hypotheses without proper caution.[10] Controversies surrounding the uses of citation analysis have also led to a call for a theory of citation which would provide a better foundation for various policy uses of citation studies.[11] In response, various functions of citations have been distinguished and discussed.[12] Citations have been analyzed in terms of their institutional functions as part of the reward system in science.[13] It has also been suggested that citations should be understood as concept-symbols,[14] rhetorical tools,[15] or as textual elements fulfilling specific cognitive functions in arguments presented in the citing papers.[16]

       In our opinion, the difficulties in arriving at a systematic understanding of the citing behavior of scientists and of the aggregate citation patterns stem not only from theoretical differences between the proponents of the various theories of citation, but also from the relative lack of attention to the inherently multidimensional character of citation, and thus also to the complex significance of aggregate citation patterns. First, it is necessary to acknowledge that providing an answer to the question "why scientists cite" will not necessarily furnish an adequate explanation of aggregate citation patterns.  Secondly, an analysis of the significance of citations has to specify whether citations are to be studied as indicators of links among texts, of links among authors of these texts or as indicators of links between texts and authors. Distinct theoretical and methodological problems arise in each of these perspectives.[17]

       In a recent review article, Susan Cozzens emphasized the dual nature of citations which according to her operate within "two systems": the rhetorical system (which we would prefer to call more traditionally, the cognitive system of science) and the reward system (which could be more broadly seen as the social system of relations between scientists).[18]  When citations are seen as establishing links of whatever nature between two texts, what is being analyzed is the cognitive content of this relation; when citations are regarded as establishing links between authors what is being studied is the social organization of scientific communities. In practice, the two aspects of citations are of course thoroughly intertwined. Their analytical separation, however, is necessary if we want to clarify the assumptions about the relations of the social and cognitive aspects of science underlying the various forms of citation analysis.[19]


                     citing author             citing text


cited author        professional relation    reward                 


cited text          cognitive resource       evidential context    



      The particular interest which citation analysis has had for science studies and for the evaluation of the performance of groups or individuals stems from the fact that citations appeared as a particularly useful indicator of links between the social and the cognitive dimensions of science: for example, the number of times an article was cited could be seen as an indicator of the performance of the cited author(s) (a translation was thus made from the cognitive use of citation in a text to the social system of rewards operating in the scientific community). Alternatively, since the citing authors competing for recognition within their communities were said to use cited texts as a means of convincing others (a form of enrolment), citations could be analyzed as a means of persuasion (here the translation was from the social organization of scientific communities, in which reputations are unequally distributed, to the cognitive organization of science, which was seen as an outcome of social processes). It was precisely the translations between the social and the cognitive aspects of citations which were always seen as the most analytically problematic: Do citations present in texts say something about the cited authors? Do they reflect some form of strategic behavior on the part of the citing author? How can we tell when a citation reflects a cognitive debt and when it is mainly a reflection of a social hierarchy within the scientific community? Is it legitimate to equate the two dimensions of citations, treating the social and the cognitive organization of science as isomorphic? How can such questions be empirically addressed?[20]

    In a recent study,[21] we analyzed citations as relations among texts, arguing that if scientific texts are treated as attempts to integrate new knowledge claims into a fabric of (shared) knowledge, citations can be shown to fulfill a number of distinct argumentative functions in the citing articles. The citing texts not only modify the knowledge claims made in the cited papers, but also use these claims in their own arguments. Thus, the cited papers can be used to supply premises or supporting evidence for further conclusions, to help in the formulation of a research agenda, or to assist in the placement of the citing text in the context of other research. Comparing such textual functions of citations to four different biochemistry articles, we also came to the conclusion that the uses to which the cited papers were put were systematically related to the manner in which the original paper integrated its own claims into the body of knowledge in the field. This manner of integration, as well as the later uses of the papers when they were cited, was distinct for each of the four biochemistry articles we examined.

     However, the analysis of the textual links between citing and cited papers tells us nothing about the citing authors' perceptions of the papers they cited, about their reasons for citing a particular paper, or about the social determinants of citation behavior. Relations among texts do not unequivocally elucidate relations between texts and authors or among the cited and citing authors. How do scientists citing a particular paper perceive its significance? Are the differences in the argumentative use to which these papers were put when they were being cited related to differences in scientists' perceptions of their significance? Do scientists who cite a given paper perceive it similarly? How are these perceptions of the cited paper related to the personal links between the cited and the citing authors? What is the relationship between the establishment of textual links between papers and the social organization of the groups of scientists who use the paper in their own texts? In order to explore these dimensions of citation behavior further, we conducted a questionnaire survey among the scientists who had cited in their own papers one of the four biochemistry articles used in our previous analysis.



    We mailed a questionnaire to the 239 authors responsible for the 91 citations to one of the four most highly cited articles published between 1979 and 1982 by one research group in the Department of Biochemistry at the University of Amsterdam.

    The cited research group as a whole studied energy transfer processes across bio-membranes. (The area is well known for the so-called Mitchell debate concerning the role of proton transfers across membranes in oxidative and photo-phosphorylation.[22] Although Peter Mitchell, the originator of the chemiosmotic theory, received the Nobel Prize for Chemistry in 1978, the debate on the chemical and physical principles involved in this type of energy transfer continues to this day.) The professor in this laboratory, together with several doctoral and postdoctoral students, was involved in an attempt to develop and test a non-equilibrium thermodynamic model of chemiosmotic energy transfers in oxidative phosphorylation in mitochondria and sub-mitochondrial particles. The second senior scientist, together with his graduate students, was studying the problem of the structure and regulation of the phosphoenolpyruvate:sugar phosphotransferase system in Salmonella typhimurium. The third line of research was being pursued by a doctoral student working in collaboration with some members of a different laboratory in the same institute. Their subject was the transport of amino acids across rat liver membranes and its role in metabolism. Finally, a new permanent member of the lab, together with a postdoctoral visitor, was studying cell differentiation processes in Dictyostelium discoideum, a problem at the cellular level intrinsically related to transport processes at the membrane. During the four years in question, the group as a whole published some 57 papers, 47 of which were full articles. We studied these articles in detail, interviewed the director of the group extensively, and were granted access to the files of referee reports, etc.

    The fact that four distinct and yet closely related research lines were being pursued in our laboratory made it convenient to vary the cognitive context in which research takes place while keeping the immediate institutional environment constant. For our analysis we chose the four most highly cited papers, each of which conveniently came from a different research line.[23] Table 1 gives the citation rates, before and after correction for self-citation and 'in-group' citation, and an indication of the citation rate per year.


Table 1

                     citations        after         c/y
                     (to 85/12/31)    correction

Westerhoff 1981          38               30         7.5
Sips 1980                30               28         5.6
Scholte 1981             27               17         4.3
Bernstein 1981           21               16         4.0
                        ---              ---
                        116               91
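The c/y column of Table 1 can be reproduced with a short sketch. The formula is an assumption on our part, since the text does not state it explicitly: corrected citations divided by the number of whole years between publication and the end of the counting window (31 December 1985).

```python
# Hypothetical reconstruction of Table 1's c/y column.
# Assumption: c/y = corrected citations / (1985 - publication year).

papers = {
    "Westerhoff 1981": (1981, 30),
    "Sips 1980":       (1980, 28),
    "Scholte 1981":    (1981, 17),
    "Bernstein 1981":  (1981, 16),
}

rates = {name: cites / (1985 - year)
         for name, (year, cites) in papers.items()}

for name, rate in rates.items():
    # Table 1 rounds these values to one decimal place.
    print(f"{name}: {rate:.2f} citations per year")
```

Under this assumption the computed rates (7.50, 5.60, 4.25, 4.00) match the published column once rounded to one decimal.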


    The questionnaire (Appendix A) contained 30 questions exploring the citing author's perception of the article he cited, the place of his article in his research program, his perception of the research field(s) to which the citing and the cited articles belong, the personal links between the cited and the citing authors, and the citing author's evaluation of the work of the group from which the cited article originated. The responses can be classified into approximately 80 variables. Chi-square is used for significance testing when the variables are nominal, Kendall's Tau-B when they are ordinal.
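The two tests mentioned above can be sketched as follows; the contingency table and the ordinal scores below are invented for illustration, since the actual codings of the ~80 questionnaire variables are not reproduced here.

```python
# Illustrative sketch of the two significance tests, using scipy.
import numpy as np
from scipy.stats import chi2_contingency, kendalltau

# Nominal x nominal (e.g. a hypothetical cross-tabulation of two
# yes/no questionnaire items): chi-square test of independence.
table = np.array([[22, 9],
                  [6, 16]])
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi:.4f}")

# Ordinal x ordinal (e.g. two hypothetical rating scales):
# scipy's kendalltau computes the tau-b variant, which corrects for ties.
x = [1, 2, 2, 3, 3, 3, 4, 4, 5]
y = [1, 1, 2, 2, 3, 4, 3, 5, 5]
tau_b, p_tau = kendalltau(x, y)
print(f"tau-b = {tau_b:.2f}, p = {p_tau:.4f}")
```

With small samples like those in this study, the chi-square approximation can be fragile; that is one reason the authors' cell counts matter when interpreting the reported significance levels.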

    Since the first nine questions referred specifically to the citing paper, we asked the respondents who cited the original paper more than once to answer these nine questions separately for each paper in which the citation appeared. The remaining questions were of a more general nature and were to be answered only once by each citing author, independently of how often he or she cited the paper.[24]

    As addresses we used all institutional addresses given in the articles, and we did not attempt to retrieve authors who had moved, since that would necessarily introduce another specific bias. All authors were included as listed on their published article. When an article gave more than one institutional address, a copy was sent to each address. After six weeks, having received only a 19% response, we mailed another copy of the questionnaire.[25]


Table 2

Number of Authors; Response Rates

                     citations    authors    respondents    response
Westerhoff 1981         30           75           38          50.7%
Sips 1980               28           67           31          46.3%
Scholte 1981            17           55           24          43.6%
Bernstein 1981          16           42           14          33.3%
                       ---          ---          ---
                        91          239          107          44.8%


    Seven authors indicated that they also replied on behalf of co-authors; six of them had cited Sips' paper. Since students and technical staff occasionally co-author biochemical papers, we think that 45% is a reasonable response rate. With the exception of the Bernstein case, we received an average of more than one response per citing article.

    Of course, the survey method also introduces bias. First, there is a bias in favour of those with tenured positions, since they are more likely than their non-tenured colleagues and students to remain at the same institute for several years and were therefore more likely to receive the questionnaires. Secondly, it should be clear that by polling only scientists who have cited one of our papers we are not reaching the entire population of scientists for whom the claim made in the paper might have been relevant: for example, those who used the paper in their research but did not cite it in their work, or those who were simply unaware of the existence of the article when they published on a closely related topic. We cannot correct for these kinds of biases with our material. However, we added a question about the three best research groups in the area, which allows us to draw a sociogram and to estimate to what extent our questionnaire covered the specialty group.


Scientific Identities of the Citing and Cited Authors

    Although the four papers whose reception in the scientific community we studied originated from the same research group, were published in recognized biochemical journals, and all dealt with problems relating to transport processes taking place in biological membranes, each of the papers belonged to a different line of research and addressed a different scientific audience. To what extent, however, can we talk of the cited and citing authors as members of the same communities, specialties, or even disciplines? How do they identify themselves and what sorts of groups do they form?

    Asked directly whether their work belongs to the same specialty as the research presented in the cited papers, the majority of the respondents answered in the affirmative. Only in the case of those who cited the paper of Sips was there a substantial number of respondents (30%) who believed that their own work belongs to a different specialty. We asked our respondents to identify the specialty in question, but unfortunately very few took the opportunity to do so. Only in the case of those who cited the paper by Westerhoff et al. was the response large enough to allow us to draw a conclusion. Of the 17 respondents who fully answered the specialty question, 16 believed that their work belongs to the same specialty as that of Westerhoff. Thirteen of them identified the specialty as bioenergetics, one as energy conversion, one as biological energy transduction, and one as electron transport and phosphorylation in mitochondrial membranes. The specialty in question is thus obviously seen by our respondents as a single definable area, even if a few give it a somewhat different label. (Incidentally, the respondent who thought that his work belongs to a different field also identified his specialty as bioenergetics.) The interpretation of the results in the other cases is far more problematic. In Bernstein's case there were only four responses: three referring to "signal transduction" and one to "development of Dictyostelium discoideum" (the model organism). We can only hazard a guess that the research area in question is perceived as distinct by the citing authors. In the case of those who cited either Sips or Scholte, no conclusions can be drawn at all.
In the case of Scholte there were only four responses, but--unlike the case of Bernstein--they were too diverse to justify even a tenuous guess.[26] The seven respondents citing Sips who indicated the specialty their research belongs to mentioned biochemistry (3 times), alanine metabolism, hepatocytes, biochemical physiology, and liver chemistry.

    We also asked our respondents about their disciplinary identities (see Table 3). Not surprisingly, the majority of our respondents identified themselves as biochemists (69% of the 61 respondents who answered this question). Relatively few of them, however, chose biochemistry as their unique disciplinary affiliation (18 respondents, or less than 30%). Fifteen respondents (36%) combined biochemistry with one other field (molecular biology, physiology, medical sciences, or microbiology), and nine chose three or more labels to describe their disciplinary identity (one respondent needed as many as six categories). Among those who did not regard themselves as biochemists there were 4 physiologists, 3 molecular biologists, 3 medical scientists, 2 biophysicists, a microbiologist, a geneticist, and a biologist, as well as 4 scientists who chose some combination of these areas.


Table 3

Disciplinary and specialty identities of citing authors*

                       Wester-     Sips       Scholte    Bern-      Total
                       hoff                              stein
                       N=19        N=22       N=15       N=5        N=61

Biochemists            15 (79%)   15 (68%)    8 (53%)   4 (80%)   42 (69%)
Molecular biologists    2          -         10 (67%)   1         13 (21%)
Physiologists           5 (26%)    8 (36%)    2         -         15 (25%)
Medical scientists      -          9 (41%)    1         -         10 (16%)
Biologists              3 (16%)    -          2         1          6 (10%)
Microbiologists         2          -          6 (40%)   -          8 (13%)
Biophysicists           2          1          -         -          3  (5%)
Other                   3 (16%)    1          3 (20%)   1          8 (13%)

*Percentages do not sum to 100% because of a large number of multiple selections.



    The proportion of respondents who chose biochemistry, as well as the choice of combinations and alternatives, differed in the four cases. Among those who cited Sips, for instance, our respondents identified themselves as physiologists (36%) and medical scientists (41%) as well as biochemists (68%). A small proportion of those who cited Westerhoff (26%) saw themselves as physiologists, but biochemistry was mentioned by an overwhelming majority (79%) of respondents. Among those who cited Scholte, the most "popular" identity was not biochemistry (chosen by 53%) but molecular biology (67%), though the overlap between the two categories was large (a full 1/3 of the group mentioned both fields). Moreover, 40% of those who cited Scholte saw themselves as microbiologists, again generally in combination with either molecular biology or biochemistry (or both). The sample of those who cited Bernstein's paper is again too small to permit reasonable generalizations, but 4 out of 5 respondents answering this question chose biochemistry as their identity, though in three cases they combined it with biology, cell biology, or molecular biology (the fifth respondent identified himself as a biophysicist).

    Our data on "specialty-" and "disciplinary-" identities suggest two conclusions:

1.  The fact that only about half of our respondents (33) chose a unique disciplinary affiliation and that only 18 (30%) regarded themselves as just biochemists gives some additional support to Whitley's[27] and Studer and Chubin's[28] claim that disciplinary identities in the biological and biomedical sciences are rather fluid. At the same time, however, with one important exception (those who cite Sips), those who cite these papers believe that their work belongs to the same specialty as that reported in the paper being cited. This is so even when the identification of this specialty "by name" is problematic. Obviously, "specialties" cut across disciplinary identities.

2. Despite the fact that the cited papers originated from the same research lab, the combinations of disciplinary identities chosen by the citing authors were quite distinct for each citing group and can be easily understood if one considers the content of the original papers. For example, the presence of a large number of "molecular biologists" and "microbiologists" among those who cite Scholte makes perfect sense when we consider the fact that Scholte studied the genetic control of a sugar transport system in a bacterium. Similarly, the presence of substantial numbers of medical scientists and physiologists among those who cite Sips is perfectly congruent with the fact that his paper examines the regulation of amino acid metabolism in the liver, a physiological process related to various metabolic disorders. The bioenergetic processes studied by Westerhoff et al. constitute a traditional biochemical problem relevant also for physiologists.


Specialty structures

    Given, on the one hand, the fact that except in the case of Sips the overwhelming majority of our respondents located their work within the same specialty as the work they cited, and, on the other hand, the fact that their disciplinary identities were far from homogeneous, we might wonder to what extent we can talk about coherent and cohesive communities when we examine such "citing" groups of scientists. To what extent do these scientists agree about the importance of the various groups contributing to the field in question? Did the original papers make reference to the work of the "most important" groups? And how were these papers received by those groups? We can gain some insight into these queries by examining the responses to questions asking our respondents to identify the three most important groups working in their area, and by examining to what extent the groups identified as most important in the field cited, and were cited by, the original papers.

     We asked our respondents to list the three most important groups working in their area of research (see Table 4). There were 52 usable responses to this question. Since, as expected, there was no overlap among the responses of authors citing different papers,[29] we will consider the cases separately.


Table 4

                  Number of groups mentioned:

                  once         2 times      >2 times       N

  Westerhoff      11 (50%)     5 (23%)      6 (27%)       19
  Sips            16 (69%)     3 (13%)      4 (17%)       15
  Scholte          8 (53%)     1  (7%)      6 (40%)       13
  Bernstein        3           2            2              5
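The tally behind Table 4 is straightforward to sketch; the responses below are invented placeholders (letters standing in for group names), not the actual questionnaire data.

```python
# Invented example of the Table 4 tally: count how many of the
# nominated "most important" groups were mentioned once, twice, and
# more than twice across respondents' lists of three.
from collections import Counter

responses = [
    ["A", "B", "C"],
    ["A", "C", "D"],
    ["C", "D", "E"],
    ["A", "C", "F"],
]

counts = Counter(group for triple in responses for group in triple)
once = sum(1 for c in counts.values() if c == 1)
twice = sum(1 for c in counts.values() if c == 2)
more = sum(1 for c in counts.values() if c > 2)
print(once, twice, more)  # -> 3 1 2
```

The distribution over distinct groups (here 3 named once, 1 twice, 2 more than twice) is the consensus measure the following paragraphs interpret: the larger the share of groups named more than twice, the stronger the agreement.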


    Despite the small number of cases, we believe we can draw some tentative conclusions about differences between the four cases. Although in all cases a substantial number of groups were mentioned only once, there are significant differences: the respondents who cited Scholte reached a relatively high level of agreement about which groups are the most important, with 40% of the named groups mentioned more than twice.[30] The consensus about which groups are important seems lowest in the case of those who cited Sips: not only were 69% of all the groups mentioned only once, but only 17% were mentioned more than twice. In the case of Westerhoff, half of the groups considered "most important" were named only once, and a substantial percentage was named twice, so that the six groups mentioned more than twice account for only 27% of all the named groups. The small number of responses in the case of Bernstein makes it difficult to draw any conclusions in terms of percentages of groups mentioned once, twice, or more than twice, though the degree of consensus about important groups is clearly greater than in the case of Sips.

    Differences among the cases are even more apparent if we consider the percentage of respondents who mentioned the three most frequently named groups. The two groups mentioned most often by those who cited Scholte were each named by 85% of all respondents, and the third group was mentioned by 69% of respondents. In the case of Sips, the most often listed group was recognized by only 47% of respondents, the second most often mentioned group by 40%, and the third by 33%. In the case of Westerhoff, one group was mentioned by 53% of all respondents, one by 47%, and one by 26%. In Bernstein's case, the most often mentioned group was selected by 4 of the 5 respondents and another by 3 of the 5.

    These differences between the four cases are also apparent when we consider the citations to the most important groups given in our original papers, and the citations of these four papers in the work of those groups listed as most important in the particular research area. The work of all the groups listed at least twice as "most important" by those who cited either Scholte or Bernstein was cited in the original papers, and authors from each of these groups were among those who cited the original paper from the Amsterdam laboratory. This complete reciprocity observed in Bernstein's and Scholte's cases breaks down in the cases of Sips and Westerhoff. Although Westerhoff cited 9 of the 11 groups nominated as "most important" by at least 2 of our respondents, his own paper was cited in the work of only 4 of these 11 groups. In the case of Sips the relation is reversed: while the citations in the original paper included references to the work of only 3 of the 7 groups listed at least twice as "most important", Sips' paper was cited by scientists from 5 of these 7 "most important" groups.

    Although these data justify the conclusion that the scientists who cited our papers belong to separate and differently structured communities, we should be rather careful in interpreting the meaning of these differences. In the case of Scholte, the relatively high degree of consensus about which groups are the most important, together with the fact that all these groups both cited and were cited by the original paper, seems to warrant the conclusion that the research area to which Scholte's paper contributed is being pursued by a fairly well delineated community headed by several broadly recognized "elite" groups. The interpretation of the results in the other three cases is more problematic. A lack of consensus in listing the three most important research groups working in the area might testify to the fact that those who cite a given paper do not belong to a single coherent research area; alternatively, it can be attributed to a relatively egalitarian reputational structure within the specific research area, or to the presence of some other divisions (such as a lack of cognitive consensus) within the field. In the case of Sips, the first of these interpretations is the most likely since, as we already reported, fully 30% of those who cited Sips and responded to our questionnaire felt that they did not belong to the same specialty as the authors of the cited paper.[31] In the case of Westerhoff, however, the relatively low degree of consensus about the identity of the most important groups working in the area cannot be explained by the citing authors not working within a unique specialty, since, as we have seen, an overwhelming majority of respondents believed themselves to be working in the same area as the authors they cited and identified this area as bioenergetics. On the basis of the data from the questionnaire, however, it is impossible to ascertain whether the dissensus about the "most important groups" is in this case a result of a lack of pronounced stratification within a relatively large field, or a result of some other--cognitive or social--divisions which account for the relative lack of agreement among the bioenergeticists on which are the three "most important groups" working in the field.[32]

    The comparison of the disciplinary and specialty affiliations of the citing authors, and of the structure of the communities to which they belonged, suggests that:

1. the audience of scientists citing a particular scientific paper can be either (part of) a fairly homogeneous and relatively well defined "specialty group" (Scholte, Westerhoff), or it might constitute a rather heterogeneous collection of scientists who address their work to different reputational communities (Sips);

2. while within some specialty groups one can find a relatively high degree of consensus about the reputational ranking of the different groups working within the field (Scholte), the reputational hierarchy in other specialties--even when they are recognized as such by the scientists working in the area--appears to be far less consensual and more diffuse (Westerhoff). These kinds of differences in the organization of research communities are not apparent from citation data alone, and yet we might suspect that they affect citation rates and patterns.


The Social Links Between the Cited and the Citing Authors

    What is the position of the Amsterdam laboratory and its work within these different communities of scientists? Are the citing authors personally acquainted with the authors of the articles they cited?  Does the different structure of the communities citing a particular paper correspond to differences in the social interactions between the cited and the citing authors? 

    An overwhelming majority of our respondents knew of other work originating from the Amsterdam laboratory (91.1%) and cited it often or very often (87.5%). Thus, all those who cited Scholte and Bernstein, and all but one of those who cited Westerhoff's paper, knew of other work being done in the laboratory. Only in the case of Sips' paper did some of our respondents (21.1%) not know of other research performed in the Amsterdam group. Similarly, while the work of Scholte et al., Westerhoff et al., and Bernstein et al. was cited either very often or at least occasionally in the articles of those who cited the paper we studied, a substantial minority (29.2%) of respondents had not cited Sips or any of his co-authors on more than this one occasion.

    The citing scientists not only knew and often cited the work of the researchers from this laboratory, but many of them also knew the Amsterdam scientists personally (59.3%). Most contacts consisted of meetings at conferences (48.3%) and exchanges of papers or letters (37.9%), though 19% of respondents had also at some point collaborated with the authors they cited, and more than 22% reported visits to each other's laboratories. Here again, the case of Sips stands somewhat apart: while Scholte and/or his co-authors were personally known by 78.6% of the respondents at the time they were cited, Westerhoff et al. by 68.4%, and Bernstein et al. by 80%, only a third of those who cited Sips' paper and answered our questionnaire personally knew either him or one of his co-authors. Those who cited Sips' paper were also the least likely to exchange papers or letters with the original authors (15% in Sips' case vs. 42% for Westerhoff, 50% for Scholte, and 4 out of 5 for Bernstein), to engage in collaborations with them (10% for Sips vs. 21% for Scholte, 21% for Westerhoff, and 2 out of 5 for Bernstein), or to visit each other's laboratories (none for Sips vs. 43% for Scholte, 21% for Westerhoff, and 3 out of 5 for Bernstein).

    These differing degrees of social integration within the four communities were also apparent from responses to the question whether a draft or a preprint of the citing article was sent to the cited authors.  Overall, only 10.4% of our respondents sent drafts or preprints of their papers to the cited authors. In Scholte's case, however, over 26% did so, while none of those who cited Sips et al. sent them a preprint of the citing paper. Similarly, while those who cited Scholte (as well as Bernstein) were likely to consult with other researchers about the reference (50% of all cases), those who cited Sips were very unlikely to do so (only one respondent indicated that he consulted with others about the citation).

    In the previous section we showed that the audience of Sips' paper did not belong to an easily demarcated "specialty group," while the audience of Scholte's paper corresponded most closely to an "ideal type" of such a specialty group. This conclusion receives additional support from the fact that Scholte and his co-authors, as well as their other work, were known to a very large proportion of those who cited them, that very many of these cited other work from the Amsterdam laboratory either often or very often, and that they were the group most likely to send preprints and to visit each other's laboratories. In other words, not only does Scholte's group appear to belong to a well defined community, but the Amsterdam laboratory seems to be well integrated and quite central within this community. In contrast, Sips and his co-authors, as well as their work, were the least well known of all our authors to those who cited them. A substantial number of those who cited Sips' paper never cited any other work of this group; not a single one of our respondents in this case remembered sending a preprint of his paper to Sips or his co-authors; and none of them reported visiting the Amsterdam laboratory or being visited by one of the authors of Sips' paper. The data on social links between the cited and the citing authors also support our earlier conclusion that the audience of Westerhoff's research was a distinct specialist community, though its reputational structure was more difficult to characterize than in the case of Scholte. As in the case of Scholte, we find here relatively high levels of social interaction between the citing and the cited authors: Westerhoff and his co-authors, as well as their work, are well known, and they are just as likely as those who cited Scholte to exchange papers and letters and to have collaborated with those who cited them.
Although the respondents in Westerhoff's case are somewhat less likely than in the case of Scholte to exchange laboratory visits, they appear to meet at conferences much more often than any other group (73.7%). Thus, despite our difficulty in establishing a clear reputational hierarchy within this specialty, the group around Westerhoff, like that around Scholte (and probably also Bernstein), seems to be socially well integrated into the community of scientists working in this area.


Evaluations of the Amsterdam Laboratory

    Personal acquaintance with the researchers from the Amsterdam laboratory was not correlated with the evaluation of the quality of this work: there was no difference between the evaluations of those who knew the Amsterdam researchers and those who did not. On 7-point Likert scales for the 'quality', 'relevance', 'originality' and 'influence' of research performed in Amsterdam, our respondents evaluated the research done in Amsterdam very highly, particularly in terms of its quality and relevance, though the laboratory also scored high on influence and originality.



              +3       +2       +1        0       -1    -2/-3     N

quality     46.9%    34.7%    12.2%     6.1%     0%      0%      49
relevance   44.0     28.0     22.0      4.0     2.0      0       50
influence   22.2     37.8     31.1      8.9     0        0       45
originality 20.4     42.9     24.5     10.2     2.0      0       49


For quality: +3 = excellent, +2 = very good, +1 = good, 0 = average, -1 = below average, -2 = bad, -3 = very bad; for relevance, originality and influence: +3 = very high, +2 = high, +1 = quite high, 0 = medium, -1 = quite low, -2 = low, -3 = very low.



    Correlations among these measures (Kendall's tau-b) are highly significant (all p < .01), and range from 0.77 between 'influence' and 'originality' to 0.40 between 'quality' and 'relevance.'
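The rank correlation used here can be computed directly from the Likert scores. The following sketch implements Kendall's tau-b, which adjusts for the many ties that 7-point scales produce; the scores in it are illustrative, not the actual survey data.

```python
# Pure-Python sketch of Kendall's tau-b for two Likert-scored items,
# counting concordant, discordant and tied pairs over all respondents.
# The scores below are illustrative, not the survey data.
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0 and dy == 0:      # tied on both items
            ties_x += 1
            ties_y += 1
        elif dx == 0:                # tied on x only
            ties_x += 1
        elif dy == 0:                # tied on y only
            ties_y += 1
        elif dx * dy > 0:
            concordant += 1
        else:
            discordant += 1
    n0 = len(x) * (len(x) - 1) // 2  # total number of pairs
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))

influence   = [3, 2, 2, 1, 3, 2, 1, 0, 2, 3]
originality = [3, 2, 1, 1, 3, 2, 0, 0, 2, 2]
print(round(kendall_tau_b(influence, originality), 2))
```

The same coefficient is available in standard statistical packages; the point of the sketch is only to make explicit how ties enter the denominator.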

    However, if we disaggregate these results according to the four cited papers, we again find significant differences.  Although the ratings cluster on the positive side of the scales, differences in the evaluations on the four scales are now also significant. Furthermore, the correlations between the indicators sometimes disappear at the sub-group level.

    While the work of Sips and Scholte and their co-authors is rated very highly in terms of its quality, the work of Westerhoff is rated highest in terms of its relevance. Because of its low rating on 'influence' and its high rating on 'relevance' the correlation between these variables disappears in the Westerhoff case (T= .21213; p = .1773), and correspondingly, the correlations between 'influence' and 'quality' in the case of Sips, and the one between 'relevance' and 'quality' in Scholte's case are now smaller than .3 and not significant (p >> 0.05). (In the case of Bernstein we again have a rather low rate of response, but the few respondents who did evaluate it, judged the Amsterdam research in this area lower than was the case for the other groups of citing authors.)  

    Although the disappearance of correlations is enhanced by the smaller group sizes when the cited authors are assessed separately, some of the other correlations are not affected (for example, 'quality' and 'originality' correlate significantly for all groups), and these results therefore indicate an underlying heterogeneity among the groups of respondents. The convergence at the aggregate level is to some extent an artifact of the grouping of mostly positively assessed papers.  The next question, therefore, is: what makes the reception of the work of these four research groups so different?


Reception of the Paper and Its Impact

    Using discriminant analysis, with the 'cited reference' as the grouping variable and the other variables as discriminating variables,[33] group membership is correctly predicted in 104 of the 107 cases (97.2%). Classification is 100% correct for the responses by authors citing Sips or Scholte; in the case of citations to Bernstein's paper only 1 of the 14 cases is misclassified (as a citation to Scholte), and in Westerhoff's case 2 of the 38 cases are misclassified (see Table 5).


Table 5

Classification Results using Discriminant Analysis



                            Predicted Group Membership
Actual Group   No. of Cases      1         2         3         4
____________   ____________   ______    ______    ______    ______

Group 1             38           36         0         1         1
Group 2             31            0        31         0         0
Group 3             24            0         0        24         0
Group 4             14            0         0         1        13

               N = 107

Percent of "grouped" cases correctly classified:  97.20%
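A classification table such as Table 5 is assembled by cross-tabulating actual against predicted group membership and counting the diagonal. A minimal Python sketch (the group labels below are illustrative, not the survey data):

```python
# Cross-tabulating actual vs. predicted group membership, as in Table 5,
# and computing the percentage of cases correctly classified.
# The labels below are illustrative, not the survey data.
from collections import Counter

def classification_table(actual, predicted, groups):
    table = {g: Counter() for g in groups}
    for a, p in zip(actual, predicted):
        table[a][p] += 1
    correct = sum(table[g][g] for g in groups)  # the diagonal
    return table, 100.0 * correct / len(actual)

groups = [1, 2, 3, 4]
actual    = [1, 1, 1, 2, 2, 3, 3, 4, 4, 4]
predicted = [1, 1, 3, 2, 2, 3, 3, 4, 4, 3]

table, pct = classification_table(actual, predicted, groups)
for g in groups:
    print(g, [table[g][h] for h in groups])
print(f"{pct:.1f}% correctly classified")
```

The discriminant analysis itself would supply the predicted labels; the table and the percentage are then a matter of simple counting.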




    This result suggests that the cited reference is a major source of variance.  For 28 of the 72 variables (39%) which can be relevantly assessed in this way, the chi-square of the relation with the cited article is significant.  Among these questions are most of the cognitively most interesting questions of the survey, such as whether the reading of the cited article "affected the way you did your research" (question 7A) or "modified the argument you wanted to make" (7D); whether one had cited the article for its theoretical contribution or its empirical data (8A and 8C); almost all of the reasons for citing given in question 9 (only "it was a contribution to the discussion" was not discriminating, since it scored high in all cases); whether experiments were repeated (10); whether the citing author still considered the article important today (24); etc.  On the other hand, differences were not significant when questions focussed on how the citing author dealt with the text of the cited article: for example, whether (s)he had read the entire article or only the abstract, whether a draft or preprint was sent to the cited authors (only exceptionally), and whether the citing author thought that the cited authors should have noticed his/her work for their own research (almost always).[34]
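The chi-square tests referred to here relate the distribution of answers to a survey question to the four cited references. A minimal Python sketch of the statistic for a 2 x 4 contingency table (yes/no answers by cited paper; the counts are illustrative, not the survey data):

```python
# Pearson chi-square statistic for a contingency table: answers to a
# yes/no survey question (rows) cross-tabulated by cited paper (columns).
# The counts below are illustrative, not the survey data.
def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    # sum over cells of (observed - expected)^2 / expected
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(table)) for j in range(len(table[0])))

observed = [[20, 5, 18, 3],    # answered "yes"
            [18, 26, 6, 11]]   # answered "no"
stat = chi_square(observed)
df = (2 - 1) * (4 - 1)  # degrees of freedom = 3
print(round(stat, 2), stat > 7.81)  # 7.81: the 5% critical value for df = 3
```

With a statistic above the critical value, the answer distribution would be judged significantly related to the cited reference, as reported for 28 of the 72 variables.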

    What, then, did the scientists mention as their cognitive reasons for citing these articles? In question 9, we asked the citing authors whether they cited the paper because it offered them a useful comparison, provided support for their conclusions, allowed them to broaden their claim or argument, was a contribution to the discussion, etc. Since we wanted to be able to compare our ratings of the uses of these texts in arguments to the respondents' cognitive reasons for citing them, the range of options we offered in the questionnaire corresponded to our classification of textual functions. Generally speaking, we distinguished among citing the paper as a means of contextualization, referring to it for agenda-building purposes, and citing it as a warrant for a particular conclusion.

    Since the reasons why someone cites do not necessarily correspond to the use of the citation in the text, the comparison we performed cannot be regarded as a kind of 'validation' of our previous study, or vice versa.  What we were interested in instead was the extent to which the authors' motives and reasons for citing correspond to the functions of citations in the textual discourse.

    As may be expected, we found both similarities and differences.

    First, it appeared that the citing authors themselves were far more likely than we had expected to mark the relatively diffuse "contribution to the discussion," which we had intended only for those cases in which the citation was chosen merely for the demarcation of the context to which the citing paper properly belongs.  This was, however, clearly not the interpretation given to this phrase by most of the respondents.  Many of them claimed to have cited the paper as a "contribution to a discussion" even when, in the text of their articles, they very clearly used a specific bit of information from that paper as a warrant which allowed them to draw a conclusion and advance a specific claim (such as using the molecular weight in order to identify a protein).[35]  Such disparities between the motives for citing the paper and the specific use of the cited statement in the argument of the citing paper occurred not only when our respondents chose "contribution to the discussion" as a reason for citing, but also in other cases. For example, a paper could be said to be cited because "it gave a summary of a certain kind of argument or position," while in the text the cited claim was clearly used to legitimize an interest in a particular problem.[36]  It appears that in identifying their reasons for citing a particular paper, the citing authors were more likely to focus on the overall importance of that paper for the area in question, and the kind of significance it had for them in general, than on their specific use of the claims made in the cited article in the arguments presented in their own papers.

    However, the distributions of the answers to the options other than "contribution to the discussion" in question 9 are all significantly related to the "cited reference" involved, and this suggests specificity. If we leave the one non-discriminating option out of consideration, we are able to construct Figure 1A, which can be compared to Figure 1B from our previous study. The agreement between the results of our analysis of the textual use of the citations and the reasons for citing reported by the citing authors is, however, qualitative rather than quantitative: for example, in both cases citations to Westerhoff's paper are often categorized as a summary or background ('context'), and those to Sips' paper are mostly classified as 'warrants.' However, there are important differences as well:[37] the citing authors rarely regard either Sips' or Bernstein's papers as setting a research agenda, while the textual analysis suggests that they were often cited for raising research questions. Similarly, while textually Scholte's paper was often cited as a warrant for specific conclusions, the citing scientists claimed to have cited it for this reason only relatively rarely. Moreover, if we pursue the analysis not at the level of the four groups but on a case-by-case basis, the results of the two types of analysis do not correspond.[38]

    More detailed statistical analysis suggests that the major reason for the differences lies in more complex patterns of interaction with other "cognitive" variables. For example, answers to question 7A--"whether the cited article affected the way you did your experiments"--are strongly correlated with responses to some of the options in question 9. Those who claimed that the cited article affected the way they performed their experiments were also likely to claim that the article served as a warrant or a critical agenda-building device. Apparently, varying relations between papers and research practices underlie the differences in scientists' perceptions of why they cite a particular paper. Textual analysis alone is obviously not a good source of information about research practices and the significance of scientific articles in these practices.

    The cited paper was rarely reported to affect the conduct of experiments. Only Sips' and Bernstein's papers were said to have affected experiments in a substantial number of cases (29% and 21%, respectively; highly significant). The cited papers were not believed to have had a great impact on the focus of the argument made in the citing paper: only 12.1% claimed that the paper led them to modify the arguments they wanted to make, and only 1.9% believed that it changed the emphasis of their own papers.  A more substantial number of the citing authors claimed that the paper they cited affected the interpretation of their own results (26.2%; not significantly different for the four groups). A large proportion of the respondents to this question (which aimed at distinguishing the influence of the cited papers on the construction of the discursive argument from their impact on the research process) checked the option "other," and many of them took the opportunity to comment. The comments did not, however, refer to the impact of the article on either research or interpretation (one respondent specifically said that the cited paper "had no effect on research or interpretation") but rather to its use in the citing paper.  For example, many respondents noted that the article served as support or confirmation of a point the author wanted to make, or that it reinforced his or her conclusions (comments read, for example: "it substantiated our interpretation of the data," it was a "good confirmation of my supposition," "it supported a point," or it "confirmed our results"). A few respondents claimed that the cited article provided a context or a background for the paper using it ("it served as a reference for past work," it "served to define the context in which our article should be viewed," it "contributed material for review" or "reference for general discussion," or it was "used to identify a certain school of thought").

     The absence of a direct impact of the cited papers on the research process--as asked in question 7A--in two of the four cases coincides with a low replication rate reported by our respondents in answer to question 10. Only 10 of our respondents attempted a replication of any of the experiments reported in the papers we studied (1 for Westerhoff, 2 for Scholte, 3 for Sips, and 4 (i.e., 67%) for Bernstein).[39]  Moreover, far from all the attempts at replication were successful: all of those who attempted to replicate Bernstein's experiments succeeded, while in the other cases there were some who failed or were not sure whether the replication was a success.

    In summary:

1. Despite the fact that our own textual analysis did not agree with the reasons for citing provided by our respondents, both types of analysis demonstrated that the perceptions of the significance of the four papers and their textual uses in argumentation were distinct and directly related to the manner in which the claims made in the original paper were given significance by being integrated into specific evidential contexts.

2. Although both our textual analysis and the analysis of the reported reasons for citing clearly reveal that the four papers were perceived as having a different kind of significance, it is a mistake to treat the textual functions of citations in arguments as equivalent to reasons for citing.[40]  The reported motives and subjective reasons of authors for citing should be distinguished from the various cognitive functions which citations play in the argumentation presented in scientific articles.

We will return to these points in the conclusions.


Evaluations of the Cited Paper

    Just as the differences in the reasons for citing the four papers were not reflected in the number of citations these papers received, so citation counts do not reflect the citing authors' judgements about the validity and importance of these papers, either when they were cited or today.  The papers we selected were cited relatively frequently, but were the claims made in them judged to be correct? How well did they withstand the test of time? Did they continue to be important?  Would they still be cited today?

    We asked our respondents whether they believed the claims to be correct, in need of modification, or incorrect. Although only two respondents believed that the claims made in any one of the papers were simply wrong, there were significant differences in the evaluation of the four papers in this respect. Thus, while the paper by Westerhoff et al. was the most heavily cited of all the papers we studied, only a minority (36.8%) believed that the claims made in this paper could stand in their original form; 52.6% believed that they needed modification, and 10.5% believed they were incorrect. The papers of Bernstein et al. and Scholte et al., despite the fact that they were cited somewhat less often, were regarded as correct by all respondents; and although nobody claimed that the paper by Sips et al. was incorrect, 20% believed that the claims made in this paper needed some modification today.

    Moreover, it appears that most of those who believed the cited paper to be correct (and thus the majority of our respondents) also continued to regard it as important and believed that it would still be cited today. The assessment of importance is indeed significantly related to the estimated likelihood that other scientists writing on a similar topic would cite the article today. However, we found no significant relation between whether the knowledge claim is thought to be correct today and whether it is still recommended for citation. For example, although only 36.8% of those who cited Westerhoff's article believed that the claims made there were correct without any need for modification, as many as 65% believed it to be important even in hindsight, but only 43.8%--not necessarily the same cases--would recommend citation today. In Bernstein's case, on the other hand, although everyone believed his paper to have been correct, a majority of our respondents were doubtful whether it had been important in hindsight, and only 50% thought it worth a citation today.

    Of course, it should not be surprising that an important paper might need modification, or that a paper whose claims are believed to be correct would no longer be considered new enough to be cited, or even that some would continue to cite papers which they believe need modification. But just as different papers are used for different purposes when they are cited, so these complicated relations between importance, validity, and citing behavior suggest that any interpretation of simple citation counts as indicators of impact, quality, or importance must be considered problematic.



    In the introduction to this paper, we developed a scheme of citing versus cited, and authors versus texts.  We argued that the various dimensions of citations which can be distinguished on that basis do not necessarily coincide, or even exhibit congruence.  Indeed, we have shown in our limited sample that the relations created by citations are more complex.

    The first conclusion which our material allows us to draw concerns the use of citations in science studies as indicators of institutional performance. Although the four articles whose citations we studied were written by scientists working within a single laboratory of biochemistry at one university, although all of them belonged to the same general area of bioenergetics and transport, and although each was cited approximately the same number of times, we found systematic differences in the structure of the communities to which these papers were of relevance, in the social relations between the citing and the cited authors, in the perceptions of each paper's importance, and in the reasons scientists gave for citing them. Though we were systematically comparing "like with like," we found nothing but differences.

    At the same time, these differences could be related systematically to the contribution which the cited paper made to a specifically structured problem area on which a group of scientists has been working.[41] Some of these groups formed rather clear specialty communities with definite cognitive and social identities, high levels of interaction among their members, and extensive agreement on the reputational structure within the specialty; others were more heterogeneous, their identity was far less clearly defined, they appeared to have been substantially less integrated, and in some there was also less agreement on the reputational structure. Obviously, the reception of a scientific paper will be influenced in some measure by such differences in the structure of the communities to which the paper is of relevance.

    The recognition of such differences between different kinds of fields was the main reason for the insistence of many citation analysts that citation rates can be meaningfully compared only among groups working in the same areas. Our analysis suggests, however, that the criterion of what constitutes sufficiently similar groups is itself problematic: despite the fact that the four papers we compared originated from the same small laboratory, they were clearly addressed to four distinctly structured communities. The local institutional identity of the papers' origins is not sufficient to assure the similarity of their audiences, nor does it provide clues for explaining the papers' perceived significance and reception.

    A second question in the sociology of science which we may address with some of our material is how to conceptualize the organization of research at the level of scientific fields.  It has been contended that this organization should be analyzed as a reputationally controlled work organization, organized at the level of scientific communities.[42]  Again, we did not find one scientific community of biochemists with a shared pattern of work organization, but a variety of social structures at a lower level than that of the specialty involved ("bioenergetics"). While one or two of these communities were indeed firmly integrated and hence probably reputationally controlled, such integration and reputational hierarchy are more problematic in the other cases.  In the case of the authors citing Westerhoff, we notice a state of continuous crisis at the research front, which is reflected in the lack of a clear reputational hierarchy in this area.  In the Sips case, we found an instance of a knowledge contribution which indicated interdependencies among researchers who did not work in precisely the same research area and who, therefore, according to several indicators, could not be considered members of the same specialty group. We think that in this case the paper's significance is not even mediated by organization at the level of communities.

    Therefore, we would like to speculate that, rather than the social organization of science explaining the cognitive organization, the social organization is part of, and maybe even constituted by, the specific problems which are addressed at the research front.  When research problems are stable--one might say "paradigmatic"--they give rise to stable communities, which sometimes manage to agree on standards and past performance in the area.  In other cases, however, contributions can be acknowledged almost purely for cognitive reasons.[43]

    The results of our questionnaire also suggest that the impact of a cited paper on the conduct of research or on the interpretation of results is often slight, that it differs from case to case, and is not directly and unequivocally related to how the citing authors evaluate the results of the paper they cite.  The citing authors appeared to cite often for rather diffuse reasons (such as "it was a contribution to the discussion"), even if the sentence that was actually cited was fairly specific and played a definite role in the argumentation of the citing paper. This difference between the scientists' reasons for citing and the textual role of the cited text in the argumentation of the citing paper is an important analytical distinction which cannot be ignored in the analysis of citation behavior on the one hand and of the cognitive role of citations in scientific literature on the other.[44] 

    Our results suggest that while papers are perceived as having different kinds of significance, these perceptions do not correspond directly to the functions that the cited texts play in the argumentation of citing texts.  Scientists appear to regard the papers they cited as having a particular, rather generalized significance, and to treat them in a sense as "concept symbols"[45] of particular types of contribution.  These perceptions are distinct for different papers and reflect the scientists' perceptions of the field's cognitive organization, its research agenda, and its background. These perceptions of the paper, however, did not correspond clearly to the distinct textual uses to which the papers were put when they were cited. The disjunction between these two levels of analysis--of the cognitive organization of the field as evidenced by textual analysis, and of the actors' perceptions of this organization--has important methodological consequences not only for the use of citations as indicators of the quality or impact of scientific publications, but also for various approaches adopted in sociological analyses of science.

    Finally, in relation to citation analysis as an instrument for science policy, we can narrow the formulation of our main conclusions down to the following points. First, whether citations are given to a scientific contribution seems to depend in the first instance on whether citing authors can use the reference in their texts.[46]  Whether they in turn are able to do so depends on the current development of the field, and hence citations are not an indicator of the quality of cited papers at the moment when they were published.  Secondly, when using citations as indicators of social structure (reputational hierarchies, group networks, etc.), one cannot always infer from relations among (aggregates of) documents to attributes of authors or institutions.[47]






Amsterdamska, O., and L. Leydesdorff, 1989. Citations: Indicators of Significance.   Scientometrics 15 (nrs. 5/6): 449-471.


Bernstein, R. L., C. Rossier, R. van Driel, M. Brunner, and G. Gerisch, 1981. Folate Deaminase and Cyclic-AMP Phosphodiesterase in Dictyostelium Discoideum: Their Regulation by Extracellular Cyclic-AMP and Folic Acid.  Cell Differentiation 10: 79.


Brooks, T. R., 1985. Private Acts and Public Objects: An Investigation of Citer Motivations.  JASIS 35: 223-229.


Brooks, T.R., 1986. Evidence of Complex Citer Motivations.  JASIS 37: 34-36.


Burt, R. 1982. Toward a Structural Theory of Action.  London: Academic Press.


Chubin, D. E., and S. Moitra, 1975. Content Analysis of References: Adjunct or Alternative to Citation Counting?  Social Studies of Science 5: 423-41.


Collins, H. M. 1985. The Possibilities of Science Policy.  Social Studies of Science 15: 554-558.


Cozzens, S. E. 1981.  Taking the Measure of Science: A Review of Citation Theories.  Newsletter of the International Society for the Sociology of Knowledge 8: 16.


Cozzens, S. E. 1985. Comparing the Sciences: Citation Context Analysis of Papers from Neuropharmacology and the Sociology of Science.  Social Studies of Science 15: 127-153.


Cozzens, S. E. 1989.  What Do Citations Count? The Rhetoric-First Model.  Scientometrics 15: 437-47.


Cronin, B. 1981. The Need for a Theory of Citation.  Journal of Documentation 37: 16-24.


Cronin, B. 1984.  The Citation Process.  London: Taylor Graham.


Garfield, E. 1979. Citation Indexing.  New York, etc.: Wiley.


Gilbert, G. N. 1977. Referencing as Persuasion.  Social Studies of Science 7: 113-122.


Gilbert, G. N., and M. Mulkay, 1984. Opening Pandora's Box. A sociological analysis of scientists' discourse.  Cambridge: Cambridge University Press.


Gilbert, E. S. 1968. On discrimination using qualitative variables. Journal of the American Statistical Association 63: 1399-1412.


Hagstrom, W. O. 1970.  Factors Related to the Use of Different Modes of Publishing Research in Four Scientific Fields. In Communication Among Scientists and Engineers, edited by C.E. Nelson and D.K. Pollock. Lexington, Mass.: Heath Lexington, pp. 85-124.


Hayes, M. R., and J. D. McGivan, 1982. Differential Effects of Starvation on Alanine and Glutamine Transport in Isolated Rat Hepatocytes. Biochemical Journal 204: 365-368.


Kaplan, N. 1965. The Norms of Citation Behavior: Prolegomena to the Footnote.  American Documentation 16: 179-184.


Latour, B., and S. Woolgar, 1979. Laboratory Life.  Beverly Hills, etc.: Sage.


Leydesdorff, L. 1987a. Various Methods for the Mapping of Science. Scientometrics 11: 295-324.


Leydesdorff, L. 1987b. Towards a Theory of Citation.  Scientometrics 12: 305-309.


Leydesdorff, L. 1989. The Relations Between Qualitative Theory and Scientometric Methods in S&T-Studies. Introduction to the Topical Issue.  Scientometrics 15: 333-347.


MacRoberts, M. H., and B. R. MacRoberts, 1986. Quantitative Measures of Communication in Science: A Study of the Formal Level.  Social Studies of Science 16: 151-172.


MacRoberts, M. H., and B. R. MacRoberts, 1987a. Another Test of the Normative Theory of Citing.  Journal of the American Society for Information Science 38: 305.


MacRoberts, M. H., and B. R. MacRoberts, 1987b. Testing the Ortega Hypothesis: Facts and Artifacts.  Scientometrics 12: 293-5.


Martin, B. R., and J. Irvine, 1985. Evaluating the Evaluators: A Reply to Critics.  Social Studies of Science 15: 558-575.


McCain, K. W., 1986. Cocited Author Mapping as a Valid Representation of Intellectual Structure.  JASIS 37: 111-122.


Moed, H. 1989. Bibliometric Measurement of Research Performance and Price's Theory of Differences Among the Sciences.  Scientometrics 15: 473-83.


Moore, D. H. 1973. Evaluation of five discrimination procedures for binary variables.  Journal of the American Statistical Association 68: 399.


Moravcsik, M. J., and P. Murugesan, 1975. Some Results on the Function and Quality of Citations.  Social Studies of Science 5: 86-92.


Mulkay, M., J. Potter, and S. Yearley, 1983. Why an Analysis of Scientific Discourse is Needed. In Science Observed, edited by K. D. Knorr-Cetina and M. Mulkay, 171-203.  London: Sage.


Mullins, N. C., L. L. Hargens, P. K. Hecht, and E. L. Kick, 1977.  Group Structure of Co-Citation Clusters: a Comparative Study.  American Sociological Review 42: 552-562.


Murray, S. O., and R. C. Poolman, 1982. Strong Ties and Scientific Literature.  Social Networks 4: 233-42.


Narin, F. 1976. Evaluative Bibliometrics. Cherry Hill: CHI.


Price, D. J. de Solla, 1970. Citation Measures of Hard Science, Soft Science, Technology and Nonscience. In Communication Among Scientists and Engineers, edited by C. E. Nelson and D. K. Pollock. Lexington, Mass.: Heath Lexington, 3-22.


Scholte, B. J., A. R. Schuitema, and P. W. Postma, 1981. Isolation of IIIGlc of the Phosphoenolpyruvate-dependent Glucose Phosphotransferase System of Salmonella Typhimurium.  Journal of Bacteriology 148: 257.


Sips, H. J., A. K. Groen, and J. M. Tager, 1980. Plasma Membrane Transport of Alanine is Rate Limiting for its Metabolism in Rat-Liver Parenchymal Cells.  FEBS Letters 119: 271.


Small, H. G. 1978. Cited documents as concept symbols.  Social Studies of Science 8: 327-340.


Small, H., and E. Sweeney, 1985. Clustering the Science Citation Index Using Co-citations I: A Comparison of Methods.  Scientometrics 7: 391-409.


Small, H., E. Sweeney, and E. Greenley, 1985.  Clustering the Science Citation Index using Co-citations II: Mapping Science.  Scientometrics 8: 321-340.


SPSS 1976. SPSS/PC+ Advanced Statistics.  Chicago: SPSS Inc.


Studer, K. E., and D. E. Chubin, 1980. The Cancer Mission. Social Contexts of Biomedical Research.  Beverly Hills, etc.: Sage.


Todorov, R., and W. Glänzel, 1988. Journal citation measures: a concise review.  Journal of Information Science 14: 47-56.


Vinkler, P. 1987. A Quasi-Quantitative Citation Model.  Scientometrics 12: 47-72.


Waygood, E. B., R. L. Mattoo, and K. G. Peri, 1984. Phosphoproteins and the Phosphoenolpyruvate:Sugar Phosphotransferase System in Salmonella typhimurium and Escherichia coli: Evidence for IIIMannose, IIIFructose, IIIGlucitol and the Phosphorylation of Enzyme IIMannitol and Enzyme IIN-Acetylglucosamine.  Journal of Cellular Biochemistry 25: 139-159.


Westbrook, J. H., 1960. Identifying Significant Research.  Science 132: 1229-34.


Westerhoff, H. V., A. L. M. Simonetti, and K. Van Dam, 1981. The Hypothesis of Localized Chemiosmosis is Unsatisfactory.  Biochemical Journal 200: 193.


Whitley, R. D. 1984.  The Intellectual and Social Organization of the Sciences.  Oxford: Oxford University Press.




[1]. Rotating first authorship; no order of seniority implied. The authors wish to thank Prof. Karel Van Dam for his support in testing the questionnaire and his letter of recommendation, and Hubert-Jan Alberts for his contribution to organizing and analysing the data.

[2]. Amsterdamska and Leydesdorff (1989).

[3]. Small and Sweeney (1985).

[4]. Narin (1976), Leydesdorff (1987a), Todorov and Glänzel (1988).

[5]. Hagstrom (1970), Price (1970).

[6]. Garfield (1979).

[7]. Cozzens (1985).

[8]. Moed (1989).

[9]. Martin and Irvine (1985).

[10]. MacRoberts and MacRoberts (1986, 1987a, and 1987b).

[11]. "Behind the practical achievements (of citation analysis) lies an unresolved epistemological question, which is responsible for the hairline cracks in the intellectual superstructure."  Cronin (1981).

[12]. Cozzens (1981).

[13]. Kaplan (1965).

[14]. Small (1978).

[15]. Gilbert (1977).

[16]. Moravcsik and Murugesan (1975), Chubin and Moitra (1975), Amsterdamska and Leydesdorff (1989).

[17]. Leydesdorff (1989).

[18]. Cozzens (1989).

[19]. Note, that at the structural level, aggregates of citations can correspondingly be used for various forms of mapping:


                            citing groups/institutes   citing document sets

  cited groups/institutes   sociometric networks       hierarchies
  cited document sets       e.g., co-citation maps     e.g., journal mapping


     The question of how individual citation behavior is to be conceptualized in relation to networks of citations as indicators of structure merits a separate, more methodologically oriented study. See also: Burt (1982).

[20]. The substantive validation of scientometric constructs is, of course, in itself an important subject in science studies. However, we have not found systematic attempts in the literature to evaluate how and why a particular citation relation plays a role in the four distinct dimensions, i.e., as relations between cited and citing author, and cited and citing text.

    This lacuna was also noted by Cronin (1981, 21). In information retrieval, citations have been extensively reviewed as carriers of information for practicing scientists by Cronin (1984). The focus in these studies has mostly been on the reasons for citing authors to select references. (The first such study to our knowledge is by Westbrook (1960); for an extensive recent study, see Vinkler (1987). The emphasis on "practicing scientists", i.e., citing authors, is also fundamental to co-citation maps. See, among others, Small and Sweeney (1985).) In addition, research has focused on the validation of co-citation studies and of other classificational schemes (e.g., Mullins et al. (1977)). Bibliometric links by citations are the core of a set of studies usually labelled 'citation context analysis' (Moravcsik and Murugesan (1975), Chubin and Moitra (1975), Latour and Woolgar (1979), Cozzens (1985)). Sociometric links between cited and citing researchers have been studied by Murray and Poolman (1982).

[21]. Amsterdamska and Leydesdorff (1989).

[22]. For an earlier sociological study of this debate see Gilbert and Mulkay (1984).

[23]. The four articles are: Sips et al. (1980), Westerhoff et al. (1981), Scholte et al. (1981), and Bernstein et al. (1981).

[24]. This happened in 33 cases, and in four more cases we thought it appropriate to handle the response in this way. There were 19 partial responses on Westerhoff et al.'s paper (50%), and 8 (out of the total of 14) partial responses from authors citing Bernstein's paper.

[25]. In one case we received notice of the death of the respondent; in several cases, the questionnaire was returned as undeliverable. One Dutch respondent, who had cited the Bernstein-article five times, angrily refused to respond. In one case, an Italian respondent added to the margin of his questionnaire: "Despite the letter of recommendation by Prof. Van Dam, this has to stop!"

[26]. 'Transport,' 'molecular biology,' 'bacterial phosphotransferase systems' and 'sugar uptake systems' were each mentioned once.

[27]. Whitley (1984).

[28]. Studer and Chubin (1980).

[29]. The original Amsterdam laboratory, mentioned often by the respondents, could be considered an exception, but even here the names of the principal researchers were different: Van Dam and/or Westerhoff were mentioned in the case of Westerhoff's paper; Postma in the case of Scholte's paper; and Sips and/or Tager in the case of Sips' paper.

[30]. This high degree of consensus becomes even more pronounced if we disregard the answer of a single scientist who listed nine groups instead of the three we asked for: in that case the number of groups mentioned is reduced to ten, of which two are mentioned twice and four more than twice.

[31]. It was also this group of respondents that felt least sure whether the cited authors would notice the citing paper for their own research or cite it in their work (whereas 100% of the respondents in the case of Bernstein and Scholte felt that the authors they cited should have noticed the citing paper for their own research, only 74% of the respondents who cited Sips expected this kind of reciprocity).

[32]. Both the literature in the field and interviews we conducted suggest that bioenergetics is indeed in a state of continuous crisis, and that the lack of consensus on the most important groups might result from cognitive disagreements within bioenergetics: although the chemiosmotic theory proposed by P. Mitchell serves as a framework for most of the work being done in the area, a number of alternative interpretations of this theory are being pursued, and the scientists involved adopt different and not always consistent experimental approaches and formal interpretative methods.

[33]. Although discriminant analysis is only mathematically defined for interval variables, in the case of dichotomous variables most evidence suggests that the linear discriminant function may perform reasonably well. See, among others, Gilbert (1968), Moore (1973), SPSS (1976).
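The point of footnote [33] can be illustrated with a minimal sketch of a Fisher-style linear discriminant applied to dichotomous (yes/no) variables. The data below are invented for illustration only; the number of questions, the response probabilities, and the group sizes are assumptions, not the study's actual questionnaire data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical groups of citing authors answering five yes/no
# questions (1 = yes, 0 = no), with different answer probabilities.
group_a = (rng.random((30, 5)) < [0.8, 0.7, 0.2, 0.6, 0.3]).astype(float)
group_b = (rng.random((30, 5)) < [0.3, 0.4, 0.7, 0.2, 0.6]).astype(float)

X = np.vstack([group_a, group_b])
y = np.array([0] * len(group_a) + [1] * len(group_b))  # 0 = a, 1 = b

# Fisher discriminant direction: w = S_w^{-1} (mean_a - mean_b),
# with S_w the summed within-group covariance (lightly regularized,
# since binary variables can make S_w near-singular).
mean_a, mean_b = group_a.mean(axis=0), group_b.mean(axis=0)
s_w = np.cov(group_a, rowvar=False) + np.cov(group_b, rowvar=False)
w = np.linalg.solve(s_w + 1e-6 * np.eye(5), mean_a - mean_b)

# Classify each case by which side of the midpoint between the
# projected group means its own projection falls on.
scores = X @ w
threshold = (mean_a @ w + mean_b @ w) / 2
predicted = (scores < threshold).astype(int)  # below midpoint -> group b

accuracy = (predicted == y).mean()
print(f"correctly classified: {accuracy:.1%}")
```

With dichotomous predictors the normality assumption behind the linear discriminant is violated, but, as the sources cited in the footnote report, the classification rule itself often remains serviceable; the percentage of correctly classified cases is then the natural summary statistic, as in footnotes [34] and [38].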

[34]. For example, if we use as discriminating variables only the replies to the first nine questions, i.e., those which we asked respondents to answer for each case separately because of their cognitive character, the percentage of correctly classified cases is already 81.25% (± 3%).

[35]. For example, in Waygood et al. (1984, 144). The reference to Scholte's paper appears in the subsection entitled "Molecular Weight Assessment" and reads: "Molecular weight standardization of the frozen gel was made using enzyme...III glc, 20K (ref. to Scholte, and another paper reporting the weight) phosphorylated standard." In the text, the reference obviously serves the specific function of validating the identification of a protein as enzyme IIIglc, and thus as a warrant for the conclusion "this is IIIglc," whereas at least one of the respondents in this case saw its function to be "contribution to the discussion."

[36]. In one of the papers citing Sips (Hayes and McGivan (1982, 365)), the reference to Sips occurs in an introduction and clearly serves to establish the importance of the problem which the paper will address: "It has subsequently been shown that the transport of alanine into hepatocytes can limit alanine transport under certain conditions (Sips et al., 1980), so it is important that the effect of starvation on alanine transport be quantified." Despite the fact that the paper is in part an attempt to provide just such a quantification, so that the citation of Sips' claim seems to serve as a legitimation of the problematics raised in the text, one of the authors of the article claimed to have cited that statement not because it "raised important questions" but because "it gave a summary of a certain sort of argument or position."

[37]. The underlying matrices for the two pictures correlate at .48; this correlation is not significant.

[38]. It is not possible to classify a majority of the cases using only the answers to questions 9A to 9F as discriminating variables in a discriminant analysis: when the Scholte-paper is the grouping variable, 75% of the cases are misclassified, against only 29% for the Sips-paper.

[39]. Differences are significant at the .01 level.

[40]. See also: Mulkay et al. (1983).

[41]. See also: Collins (1985).

[42]. Whitley (1984).

[43]. See also: Vinkler (1987), who estimates from his data that 81% of the citations are given for exclusively professional reasons, 2% for exclusively connectional reasons, and 17% would be a result of interaction between the two factors.

[44]. Leydesdorff (1987), Cozzens (1989).

[45]. Small (1978).

[46]. Using citations as an indicator for citing practices is in accordance with the philosophy behind co-citation analysis. See also: Small, Sweeney and Greenley (1985).

[47]. MacRoberts and MacRoberts (1986 and 1987).