Ranking systems

Ranking systems use quantitative methods to indicate research quality. On the basis of different parameters, they build hierarchies of scientific journals, universities and researchers. Growing political pressure to ensure that funding flows to the best research and universities has turned the spotlight on ranking systems.

Scientific journals

Journals are ranked primarily on the basis of citations. In addition to articles in scientific journals being put through a peer-review process to safeguard a certain quality, the individual journals are also measured by their influence in the research field, their so-called Journal Impact Factor.

The impact factor of a journal is calculated from the frequency with which its articles are cited within a given period. Two databases – Journal Citation Reports (Thomson Reuters) and SCImago Journal & Country Rank (Scopus-Elsevier) – provide this type of information. The two databases do not use the same computation method and do not index the same journals. Since different research fields have different publication and citation traditions, it is not meaningful to compare the impact factor of journals across fields of research.

Journal Citation Reports (JCR) is still considered the database with the most recognised ranking system, covering the most recognised scientific journals within the technical, scientific and health fields, whereas journals from the social sciences and humanities are not covered well enough by the database for it to show a hierarchy of journals in those fields. In this database the Journal Impact Factor (JIF) expresses the average number of times articles published in a specific journal in the two or five preceding years have been cited during the JCR year. The database also offers other computation methods to show the impact of a journal, but the JIF is still the most commonly used in research communities.
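
To make the computation concrete, the two-year JIF can be written out as a simple ratio. The sketch below uses invented figures, not real JCR data:

    def two_year_jif(citations_in_jcr_year, citable_items_prev_two_years):
        # Two-year JIF: citations received in the JCR year to articles
        # published in the two preceding years, divided by the number of
        # citable items published in those two years.
        return citations_in_jcr_year / citable_items_prev_two_years

    # Hypothetical journal: 120 citable items published in 2011-2012,
    # cited 300 times during 2013.
    print(two_year_jif(300, 120))  # -> 2.5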

The SCImago Journal & Country Rank portal also includes a ranking system for journals, the so-called SJR indicator. The SCImago Journal Rank (SJR) is based on the algorithm behind Google PageRank™ and expresses the average number of weighted citations received in a given year by articles published in a specific journal in the three preceding years. SCImago also offers other computation methods to show the impact of a journal. The database covers humanities and social science journals better than JCR does.
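
The actual SJR computation is more elaborate (a three-year window, weighting and normalisation of citations), but the underlying PageRank idea can be sketched as a power iteration over a journal citation matrix. The matrix below is invented for illustration:

    import numpy as np

    # Hypothetical citation matrix: C[i, j] = citations from journal i to journal j.
    C = np.array([[0, 4, 1],
                  [2, 0, 3],
                  [5, 1, 0]], dtype=float)

    # Each journal distributes its outgoing citations as weights.
    W = C / C.sum(axis=1, keepdims=True)

    # PageRank-style power iteration with a damping factor.
    d, n = 0.85, C.shape[0]
    prestige = np.full(n, 1.0 / n)
    for _ in range(100):
        prestige = (1 - d) / n + d * (W.T @ prestige)

    print(prestige)  # a citation from a high-prestige journal counts for more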

Universities

International ranking lists are drawn up for universities. They compare the top 300 to 500 of the roughly 17,000 universities in the world on the basis of a wide spectrum of indicators, and institutions use them to compete for the best researchers, students and cooperation partners. The various ranking lists are based on different indicators, such as reputation among other academics and employers, number of citations, number of students per researcher, internationalisation etc.

The two main indicators in most ranking lists are the universities' publication frequency in the international journals indexed in the databases Web of Science and Scopus, and the universities' reputation. Reputation is established from data gathered from researchers and cooperation partners, for example their knowledge of a given university. The most highly recognised ranking lists are QS, Times Higher Education, Academic Ranking of World Universities, Performance Ranking of Scientific Papers for World Universities, Leiden and CHE.
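
How such indicator-based lists are compiled can be illustrated with a weighted sum. The indicators and weights below are invented and do not correspond to any particular ranking list:

    # Hypothetical indicator scores (0-100) and weights for one university.
    indicators = {"reputation": 80, "citations": 65,
                  "students_per_researcher": 70, "internationalisation": 55}
    weights = {"reputation": 0.4, "citations": 0.3,
               "students_per_researcher": 0.2, "internationalisation": 0.1}

    # Weighted composite score that determines the university's position.
    composite = sum(weights[k] * indicators[k] for k in indicators)
    print(composite)  # -> 71.0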

Researchers

Researchers are also ranked on the basis of citations or the h-index, which likewise uses citations as a key element. The h-index was devised by Jorge E. Hirsch in 2005. Seeking a more nuanced picture of research impact, the h-index (also known as the Hirsch index) combines the number of publications with the number of citations: a researcher has index h when h of his or her publications have each been cited at least h times. This means that a researcher with many published articles but few citations, or a researcher with a few highly cited articles, has a low h-index. It also means that newcomers and younger researchers have a low h-index.
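
A small sketch of the computation, using invented citation counts, shows why both profiles end up with a low h-index:

    def h_index(citations):
        # Largest h such that h publications each have at least h citations.
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Many articles, few citations each -> low h-index.
    print(h_index([3, 1, 2, 0, 1, 2, 1]))  # -> 2
    # Few articles, heavily cited -> capped by the number of articles.
    print(h_index([120, 90]))              # -> 2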

Owing to the increasing focus on using quantitative methods to rank researchers, other systems have been devised which take further factors into account; so far, however, citations and the h-index are the most widely recognised indicators of a researcher's impact.

The key citation databases are Web of Science and Scopus. In these databases it is possible to obtain the number of citations received by a researcher, both including and excluding self-citations, as well as a researcher's h-index. The drawback of these two databases is that they by no means index all academic publications, which means that not all researchers feature in them. Furthermore, it can be difficult to establish the h-index of researchers with a common name.

If a researcher's publications are not indexed in Web of Science or Scopus, you can use the small program 'Publish or Perish', which runs on Google Scholar data, to find the researcher's citations and h-index.

When you state the h-index of a researcher, it is important to state the underlying data source, as Web of Science, Scopus and Google Scholar do not index the same publications or the same types of publications, nor do they cover the same period of time.

