Citations and usage

In the current research assessment system in the natural and social sciences, the most widely used quality indicators are citations. Open access can increase the dissemination of research and allows for the development of new metrics.

Background

In general, research is assessed by measuring the impact of the research articles that describe it. There are a number of ways in which the impact of a research article can be measured, but the dominant approach within the scientific, technical and medical fields has been to use the journal impact factor.

The Impact Factor is an average measure of the citations to a journal, calculated for the journals indexed in the Web of Science and published annually in Journal Citation Reports; both are subscription products owned by Thomson Reuters.
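As a concrete illustration of how the figure is computed: the standard two-year impact factor for a year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch in Python, with invented numbers:

```python
def impact_factor(citations_this_year: int, citable_items: int) -> float:
    """Two-year journal impact factor: citations received this year
    to items published in the previous two years, divided by the
    number of citable items published in those two years."""
    return citations_this_year / citable_items

# Hypothetical journal: 420 citations in 2011 to articles it
# published in 2009-2010, which together held 200 citable items.
print(impact_factor(420, 200))  # 2.1
```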

The original idea behind ISI and the impact factor was to give academic libraries background information to guide their decisions about which journals to subscribe to, given that budgets for paper-based journals were limited. Today this function is rather minor, since most libraries purchase huge packages – the so-called Big Deals – from the major publishers, which inevitably include many journals not indexed by ISI.

When researchers started to get web access to Web of Science via their universities, the ability to search by topic and author and to track citations became a useful function of ISI.

A vicious circle

Over the years, funders, ministries and universities have started to use impact factors as a proxy for the scientific quality of the output of universities, research groups and individual researchers. This approach to research evaluation effectively forces researchers to publish in high-impact-factor journals – that is, journals indexed by ISI. Because most Open Access journals are relatively new and have not had time to build up prestige and impact factors (with some notable exceptions), authors are largely pushed towards established subscription journals, which limits the access to and reuse of scholarly content. Thus the circle is closed: research evaluation and the current dominant position of toll-access publishers are related and keep each other in place.

To break open this circle, SPARC Europe supports Open Access publishers (united in OASPA) and projects and studies that aim to create alternative research evaluation systems.

Some good alternatives already exist

Article metrics – PLoS

Besides SCOPUS (an Elsevier product) and Google Scholar (a Google product), PLoS has developed an alternative that SPARC Europe considers a best practice for Open Access:

Publishers can use this opportunity to focus on the article level and bring in a range of easily measurable metrics, so that articles can be assessed on their own metrics rather than on the basis of the journal (and its impact factor) in which they are published.

PLoS has been pushing ahead with this idea in its article-level metrics initiative (http://article-level-metrics.plos.org/), but it is not the only publisher doing this (see also http://www.jmir.org/stats/overview?sort=views).

For all journals, but particularly for young journals without an impact factor, the use of article-level metrics in this way can be a great service to authors, because it demonstrates the kinds of impact an article is having, and why publishing with that journal is a good idea.
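As a sketch of what an article-level metrics record can look like in practice, the snippet below bundles a few per-article indicators into a single summary. The field names and numbers are hypothetical illustrations, not any publisher's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    """A per-article record of the kind exposed by article-level
    metrics initiatives. Field names are illustrative only."""
    doi: str
    html_views: int = 0
    pdf_downloads: int = 0
    citations: int = 0
    bookmarks: int = 0       # e.g. saves in reference managers
    blog_mentions: int = 0

    def summary(self) -> str:
        accesses = self.html_views + self.pdf_downloads
        social = self.bookmarks + self.blog_mentions
        return (f"{self.doi}: {accesses} accesses, "
                f"{self.citations} citations, {social} social signals")

# A hypothetical article with invented numbers:
m = ArticleMetrics("10.1371/journal.pone.0000000",
                   html_views=5400, pdf_downloads=820,
                   citations=12, bookmarks=35, blog_mentions=3)
print(m.summary())
```

The point of such a record is that a young journal can show readers and authors how an individual article performs, without waiting years for a journal-level impact factor.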

MESUR

MESUR has collected multiple years of very large-scale scholarly usage data to analyze patterns of scholarly activity and perform a comprehensive survey of possible impact metrics. The usage data were obtained from some of the world's most significant publishers, institutional consortia and aggregators in the period 2006-2008, although collection has continued and is projected to extend the range of MESUR's data from 2006 to the present. At this point the MESUR database contains more than 1 billion usage events, pertaining to nearly 50 million documents and about 100,000 identifiable serials.

The analysis resulted in a map of science that highlights the relations between a multitude of scientific domains according to large-scale patterns of user clickstreams [1] and a map of metrics that shows the diversity and variety of possible impact metrics [2]. The latter map of metrics, based on the statistical technique of principal component analysis, shows that present citation-based metrics represent only one of many possible facets of the general notion of scholarly impact.
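To make the principal component idea concrete, the sketch below applies PCA to a small, invented matrix of per-journal metric scores. The metric set, the data and the built-in correlation structure are all assumptions for illustration, far removed from MESUR's actual scale and methodology:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: journals (invented); columns: impact metrics, e.g.
# [citations, downloads, betweenness centrality, PageRank].
# Values are standardized scores, purely illustrative.
rng = np.random.default_rng(0)
metrics = rng.normal(size=(50, 4))
# Make one usage-based metric correlate with another while staying
# weakly related to citations, mimicking the finding that citation
# counts capture only one facet of scholarly impact.
metrics[:, 2] = 0.8 * metrics[:, 1] + 0.2 * rng.normal(size=50)

pca = PCA()
pca.fit(metrics)
# If citation counts load on a different component than the
# usage-based metrics, they measure distinct aspects of impact.
print("variance explained per component:",
      np.round(pca.explained_variance_ratio_, 2))
print("loadings (components x metrics):")
print(np.round(pca.components_, 2))
```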

The outcomes of the MESUR project suggest the feasibility of impact assessment systems that can complement present citation analysis to address some of its potential biases, such as its focus on a particular type of publication, significant publication delays and domain-specific citation and publication practices.

http://www.mesur.org/
