
click here to download program

FullText.exe for Full Text Analysis


FullText.exe is freely available for academic use. The program generates a word-occurrence matrix, a co-occurrence matrix, and a normalized co-occurrence matrix from a set of text files and a word list. The output files can be read into standard software (e.g., SPSS, UCINET, Pajek) for statistical analysis and visualization. A version adapted for the Korean character set is available at http://www.leydesdorff.net/krkwic . A similar program using lines (e.g., titles) instead of long texts can be retrieved at http://www.leydesdorff.net/software/ti/index.htm .


input files


The program prompts for two pieces of information: (a) the name of the file <words.txt> that contains the words (as variables) to be analyzed in ASCII format, and (b) the number of files that contain the text elements as cases. The text files have to be numbered sequentially as Text1.txt, Text2.txt, etc. The number of texts is unlimited, as is the size of each text. The number of words is limited to 1,024; keep in mind, however, that many programs (e.g., Excel) will not allow you to handle more than 256 variables in the follow-up. The words have to be on separate lines, each ended with a hard carriage return and line feed. (Save in Word as plain text with CR/LF, or use a DOS utility for saving the file. Make sure that lines are no longer than 100 characters each.)
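A word list in the expected format (one word per line, plain ASCII, CR/LF line endings) can also be written with a few lines of Python; this is only an illustrative sketch, and any editor that saves DOS-style plain text will do equally well:

```python
# Write a word list in the format the program expects: one word per line,
# plain ASCII, each line terminated with CR/LF (DOS line endings),
# and no line longer than 100 characters.
words = ["KNOWLEDGE", "UNIVERSITY", "INDUSTRY", "PATENTS"]

with open("words.txt", "w", newline="") as f:  # newline="" disables newline translation
    for w in words:
        assert len(w) <= 100                   # the program's line-length limit
        f.write(w + "\r\n")                    # explicit CR/LF
```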


       If the texts are no longer than 999 characters each (e.g., titles), one can use ti.exe instead. Ti.exe can handle an unlimited number of lines of up to 999 characters each.

       One can build a word frequency list with a concordance program. For example, TextSTAT-2 is freeware and available online. Alternatively, one can use frqlist.exe. This DOS-based program reads <text.txt> and allows for the specification of a stopword list in <stopword.txt>. The results are provided in uppercase in the file <wrdfrq.txt>.
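What frqlist.exe does can be sketched in a few lines of Python (the text and stopwords here are illustrative, not the program's actual defaults): count word frequencies, skip the stopwords, and report the words in uppercase.

```python
# Sketch of a word-frequency count with a stopword list, as frqlist.exe
# produces: uppercase words with their frequencies, stopwords excluded.
from collections import Counter
import re

text = "The program counts the words in the text"
stopwords = {"THE", "IN"}  # illustrative stopword list

tokens = [t.upper() for t in re.findall(r"[A-Za-z]+", text)]
freq = Counter(t for t in tokens if t not in stopwords)
# freq now holds PROGRAM, COUNTS, WORDS, TEXT, each with frequency 1
```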

       Stopword.exe is available for correcting a word list against a list of stopwords. Both lists (the list of words and the list of stopwords) have to be available in the same folder. The program only checks the words in their current form (that is, without corrections for plural or for uppercase/lowercase forms).


program file


The program is based on DOS-legacy software from the 1980s (Leydesdorff, 1995). It runs in an MS-DOS command box under Windows. The program and the input files have to be contained in the same directory. The output files are written into this directory as well. Please note that existing output files from a previous run are overwritten by the program. Save output elsewhere if you wish to continue with the materials.


output files


The program produces three output files in dBase IV format. These files can be read into Excel and/or SPSS for further processing. Two files with the extension “.dat” are in DL-format (ASCII) and can be read into Pajek for the visualization (freely available at http://vlado.fmf.uni-lj.si/pub/networks/pajek/ ).


a. coocc.dbf contains a co-occurrence matrix of the words from this same data. This matrix is symmetrical, and it contains the words both as variables and as labels in the first field. The main diagonal is set to zero. The number of co-occurrences is equal to the product of the occurrences in each of the texts, summed over the texts. (The procedure is similar to using the file matrix.dbf as input to the routine “affiliations” in UCINET, but the main diagonal is set to zero in this matrix.) The file coocc.dat contains this information in the DL-format.
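In matrix terms, the co-occurrence matrix is the occurrence matrix premultiplied by its own transpose, with the diagonal then zeroed. A minimal NumPy sketch (the small example matrix is invented for illustration):

```python
# Co-occurrence as described above: for each pair of words, the product
# of their occurrence counts in each text, summed over the texts,
# with the main diagonal set to zero.
import numpy as np

# Occurrence matrix: rows are texts (cases), columns are words (variables).
occ = np.array([[2, 1, 0],
                [1, 0, 1],
                [0, 3, 1]])

cooc = occ.T @ occ           # sum over texts of the occurrence products
np.fill_diagonal(cooc, 0)    # main diagonal set to zero
```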


b. cosine.dbf contains a normalized co-occurrence matrix of the words from the same data. Normalization is based on the cosine between the variables conceptualized as vectors (Salton & McGill, 1983). (The procedure is similar to using the file matrix.dbf as input to the corresponding routine in SPSS.) The file cosine.dat contains this information in the Pajek-format. The size of the nodes is equal to the logarithm of the occurrences of the respective word; this feature can be turned on in Pajek.
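The cosine normalization treats each word as a vector of its occurrences over the texts; the cosine between two words is their dot product divided by the product of the vector lengths. A sketch with an invented occurrence matrix:

```python
# Cosine normalization (Salton & McGill, 1983): each word is a vector
# of its occurrences over the texts; the cosine between two vectors is
# their dot product divided by the product of their lengths.
import numpy as np

occ = np.array([[2, 1, 0],
                [1, 0, 1],
                [0, 3, 1]])

norms = np.linalg.norm(occ, axis=0)              # vector length per word
cosine = (occ.T @ occ) / np.outer(norms, norms)  # values between 0 and 1
```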


c. matrix.dbf contains an occurrence matrix of the words in the texts. This matrix is asymmetrical: it contains the word numbers as the variables in the sequence order and the texts as the cases. In other words, each row represents a text in the sequential order of the text numbering, and each column represents a word in the sequential order of the word list (v1, v2, etc.). (It is advisable to sort the word list alphabetically before the analysis.)
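Conceptually, this occurrence matrix is simply the word frequencies per text; a minimal sketch (the texts and word list are invented for illustration):

```python
# Building the occurrence matrix: each row is a text (case), each column
# a word from the word list (variable), and each cell a frequency.
texts = ["university industry knowledge university",
         "industry patents and industry knowledge"]
words = ["university", "industry", "knowledge", "patents"]

matrix = [[t.split().count(w) for w in words] for t in texts]
# row 1: [2, 1, 1, 0]; row 2: [0, 2, 1, 1]
```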


An additional file, labels.sps, is an SPSS syntax file that can be used for labeling the variables with the words within SPSS. The words are counted as frequencies. (The plural “s” is removed before processing.) The matrix file can be imported into SPSS for further analysis. Note that SPSS reads only 255 variables correctly from a .dbf file. If one wishes to use more words, one can read matrix.txt into SPSS using the Text Wizard and run labels.sps thereafter.


d. words.dbf contains for all words the following summations:

  1. A variable named “Residual” which provides the standardized residuals of the chi-square for each of the variables; these are defined for word i as Zi = (Observedi – Expectedi) / √Expectedi. This value can be used for testing the significance of individual words in the set if the expected value is larger than five (Bornmann & Leydesdorff, 2011; Mogoutov et al., 2008);
  2. A variable named “Obs_Exp” which provides the sum of |Observed – Expected| for the word as a variable summed over the column;
  3. A variable named “ObsExp” which provides the quotient of Obs/Exp for the word as a variable summed over the column;
  4. A variable named “TfIdf” which uses Salton & McGill’s (1983: 63) term frequency-inverse document frequency measure, defined as follows: WEIGHTik = FREQik * [log2 (n) – log2 (DOCFREQk)]. This function assigns a high degree of importance to terms occurring in only a few documents in the collection;
  5. The word frequency within the set.
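These measures can be sketched as follows, assuming the usual chi-square expectation (Expected = row total × column total / grand total); the example matrix and variable names are illustrative, not the program's internals:

```python
# Sketch of the word-level statistics in words.dbf, under the standard
# chi-square expectation: expected = row_total * col_total / grand_total.
import numpy as np

occ = np.array([[2, 1, 0],
                [1, 0, 1],
                [0, 3, 1]], dtype=float)

row = occ.sum(axis=1, keepdims=True)       # totals per text
col = occ.sum(axis=0, keepdims=True)       # totals per word
expected = row @ col / occ.sum()

# Per-word summations over the column, as in words.dbf:
residual = ((occ - expected) / np.sqrt(expected)).sum(axis=0)
obs_exp = np.abs(occ - expected).sum(axis=0)

# Tf-Idf (Salton & McGill, 1983: 63): FREQ * [log2(n) - log2(DOCFREQ)]
n = occ.shape[0]                           # number of documents
docfreq = (occ > 0).sum(axis=0)            # documents containing the word
tfidf = occ * (np.log2(n) - np.log2(docfreq))
```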


Corresponding files such as obs_exp.dbf, expected.dbf, and TfIdf.dbf are also generated with information at the cell level.


More advanced options

After running these routines, the program asks whether one additionally wishes to run the same routines on observed/expected values. This generates obsexp.dbf (analogous to matrix.dbf), obsexp.txt (analogous to matrix.txt), and coocc_oe.dat and cos_oe.dat, analogous to the above input files for Pajek, but now operating on the observed/expected values instead of the observed ones. Note that answering “y” (yes) doubles the processing time of the original routine; the default is therefore “n”. The SPSS syntax file labels.sps is not changed.


Similarly, one can use (with the same variable labels) the file TfIdf.dbf, which contains the Tf-Idf values. The expected values are stored in expected.dbf. Obs_exp.dbf contains the signed (!) difference between observed and expected values at the cell level. (These are the (non-standardized) residuals of the chi-square.) The corresponding Pajek files can be generated by replacing the matrix values in cos_oe.dat with, for example, the cosine values of TfIdf.dbf. (Cosine values can be generated in SPSS under Analyze > Correlate > Proximity.) Or one can replace the non-normalized values directly in coocc_oe.dat. Note that the number of cases can be different when using the latter routine (of obs/exp), because rows containing only zeros are removed in order to prevent division by zero in the computation.
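The observed/expected computation, including the removal of all-zero rows mentioned above, can be sketched as follows (example matrix invented for illustration):

```python
# Observed/expected values at the cell level, with all-zero rows removed
# beforehand to prevent division by zero.
import numpy as np

occ = np.array([[2, 1, 0],
                [0, 0, 0],    # empty case: dropped before normalization
                [1, 0, 1]], dtype=float)

occ = occ[occ.sum(axis=1) > 0]             # remove all-zero rows
row = occ.sum(axis=1, keepdims=True)
col = occ.sum(axis=0, keepdims=True)
expected = row @ col / occ.sum()           # chi-square expectation
obs_over_exp = occ / expected              # cell-level obs/exp quotients
```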


Examples of using these programs can be found in:


-       Loet Leydesdorff, The University-Industry Knowledge Relationship: Analyzing Patents and the Science Base of Technologies, Journal of the American Society for Information Science and Technology (JASIST) 55(11) (2004), 991-1001.

-       Loet Leydesdorff & Iina Hellsten, Metaphors and Diaphors in Science Communication: Mapping the Case of ‘Stem-Cell Research’, Science Communication 27(1) (2005), 64-99.

-       Loet Leydesdorff & Kasper Welbers, The semantic mapping of words and co-words in contexts, Journal of Informetrics (2011; in press); preprint version available at http://arxiv.org/abs/1011.5209.




Bornmann, L., & Leydesdorff, L. (2011). Which cities produce excellent papers worldwide more than can be expected? A new mapping approach—using Google Maps—based on statistical significance testing. Preprint available at http://arxiv.org/abs/1103.3216.

Leydesdorff, L. (1995). The Challenge of Scientometrics: The development, measurement, and self-organization of scientific communications. Leiden: DSWO Press, Leiden University; at http://www.upublish.com/books/leydesdorff-sci.htm .

Mogoutov, A., Cambrosio, A., Keating, P., & Mustar, P. (2008). Biomedical innovation at the laboratory, clinical and commercial interface: A new method for mapping research projects, publications and patents in the field of microarrays. Journal of Informetrics, 2(4), 341-353.

Salton, G., & McGill, M. J. (1983). Introduction to Modern Information Retrieval. Auckland, etc.: McGraw-Hill.


Links to programs for (Porter’s) stemming:


Links to programs for parsing:


php-versions of Porter’s stemmer:
