Impact Factor - A review

Introduction

Since 1665, when Henry Oldenburg published the world’s first scientific journal, the Philosophical Transactions of the Royal Society, there has been inexorable growth in the number of scientific journals and articles published. With such large quantities of information available, it became necessary to find ways through the literature to the most relevant and important documents and journals. The impact factor (IF) of an academic journal is a measure reflecting the average number of citations to recent articles published in that journal. It is frequently used as a proxy for the relative importance of a journal within its field, with journals with higher impact factors deemed more important than those with lower ones. The impact factor was devised by Eugene Garfield [1,2], the founder of the Institute for Scientific Information. Impact factors are calculated yearly for those journals that are indexed in the Journal Citation Reports (JCR).

Definition

The journal impact factor for a year is the average number of times that articles published in the journal during the preceding two years were cited in that year. An impact factor of 1.0 means that, on average, articles published one or two years ago have been cited once.

OR

The impact factor is a journal-level metric defined as the number of citations in a given year to all content that a journal published in the previous two years, divided by the total number of citable items published in the journal in those two years. In effect, citations are counted over a standard time window and controlled for differences in journal size.

Calculation

In a given year, the impact factor of a journal is the average number of citations received per paper published in that journal during the two preceding years [2]. For example, if the Journal of Bone and Joint Surgery (Am) has an impact factor of 3.2 in 2012, then its papers published in 2010 and 2011 received, on average, 3.2 citations each in 2012. The 2012 impact factor of a journal would be calculated as follows:

A = the number of times that articles published in that journal in 2010 and 2011 were cited by articles in indexed journals during 2012.

B = the total number of "citable items" published by that journal in 2010 and 2011. ("Citable items" are usually articles, reviews, proceedings, or notes; not editorials or letters to the editor.)

2012 impact factor = A/B.

(Note that 2012 impact factors are actually published in 2013; they cannot be calculated until all of the 2012 publications have been processed by the indexing agency.)
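The A/B calculation described above can be sketched in a few lines of Python. The journal and all citation counts below are hypothetical, chosen so the result matches the 3.2 example in the text.

```python
# Minimal sketch of the two-year journal impact factor: IF = A / B.
# All numbers are hypothetical illustrations, not real JCR data.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """A = citations received this year to items the journal published in the
    previous two years; B = citable items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical inputs for a 2012 impact factor:
A = 480  # e.g. 260 citations in 2012 to 2010 articles + 220 to 2011 articles
B = 150  # e.g. 80 citable items published in 2010 + 70 in 2011

print(round(impact_factor(A, B), 2))  # 3.2
```

Note that only "citable items" enter the denominator B, while citations to any content (including editorials and letters) may enter the numerator A, which is the asymmetry discussed under the limitations below.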

Importance of Impact Factor

The impact factor of a journal reflects the frequency with which the journal's articles are cited in the scientific literature. Conceptually developed in the 1960s, the impact factor has gained acceptance as a quantitative measure of journal quality. It is used by librarians in selecting journals for library collections and, in some countries, to evaluate individual scientists and institutions for the purposes of academic promotion and funding allocation. Librarians faced with finite budgets must make rational choices when selecting journals for their departments and institutions; the impact factor helps guide those choices by indicating which journals are most frequently cited. Frequently cited journals generally contain articles describing the most notable scientific advances (i.e., those with the greatest “impact”) in a given field and are therefore of greatest interest to researchers, teachers, and students in most scientific disciplines.

 

Controversial observations related to impact factor:

· Scientific journals rank higher than clinical journals.

· English-language journals score higher than those in other languages.

· American journals tend to have higher impact factors than European journals.

· Review journals tend to score higher than those containing original articles.

· Review articles tend to score higher than the articles they cite.

· The most prestigious journals in different specialist areas may have very different impact factors.

· Methodological papers may score much higher than those that provide new data.

· Free electronic access tends to raise the impact factor.

 

Limitations of Impact Factor:

1. The IF does not measure the impact of a single article but represents the mean across all articles in a journal. One highly cited article can thus skew a journal's IF and lead to a biased representation.

2. A time lag may exist between when the science in a given article gains currency and when the article is published, a factor overlooked by the 2-year reference period.

3. Journal impact factors depend on the research field: high impact factors are likely in journals covering large areas of basic research and less likely in more subject-specific journals.

4. Although Journal Citation Reports includes some non-English journals, the index is heavily skewed toward English-language journals, leaving out important international sources.

5. Researchers may be more likely to pursue fashionable topics that have a higher likelihood of being published in a high-impact journal than to follow important avenues that may not be as popular.

6. There is no distinction between positive and negative citations: the calculation of a journal’s impact factor gives no indication of whether an article is cited because it contains valuable information or as an example of bad science.

7. The calculation of the impact factor is not corrected for self-citation: self-citation occurs when an author refers to his/her previous work in a new publication, or when an article in a journal cites other articles published in the same journal.

8. A journal’s internet access increases its impact factor, as it attracts more readers; open access may have the same effect.

9. The impact factor can easily be manipulated: the way the impact factor is calculated offers scope for deliberate manipulation through journal policy. Publishing long correspondence or numerous commentaries and similar items, which are likely to collect citations and contribute to the numerator of the equation but are not counted as ‘‘source items’’ and are therefore excluded from the denominator, can inflate a journal’s impact factor. Editors can also increase the number of reviews and technical reports, because these have higher citation rates. In the same vein, editors can reject articles on narrow or unpopular subjects in favor of papers dealing with subjects that appeal to a wider audience, because the latter will receive more citations. This is probably why many major journals in orthopaedics have stopped publishing case reports.

10. Misused, misguided, and a misnomer: some researchers consider the IF to be highly overrated and suggest that the measure be renamed in keeping with its actual role, which is merely that of a time-specific “citation rate index” and nothing more.

11. The IF of a journal does not describe the impact, importance, or quality of individual papers. It describes the overall citation rate of all papers in that journal and is therefore not a description of the impact of each paper. Especially in clinical fields, where most readers are clinicians rather than researchers who publish, the citation rate will be much lower than the real impact of an article.

12. Journal impact factors vary with time, in both absolute numbers and rankings. It could be argued that, within a discipline, rankings of journals are a better reflection of quality than absolute numbers.

13. Impact factors say nothing about the stringency of the peer-review process, another very important parameter of journal quality.

14. Editors, who are judged in part on changes in their journal's impact factor, may take the likely future citation rate of a manuscript into account when deciding whether to offer publication. This bias applies especially to case reports: many top clinical journals have stopped accepting case reports because they do not generate enough citations, not because of their quality or relevance.

15. A paper that is later retracted may continue to be cited [6] (and not necessarily along with the retraction), adding little to the scientific validity of the journal.

16. Citation practices are inconsistent. Scientific articles tend to cite only scientific articles, whereas clinical articles cite both scientific and clinical articles, thus increasing the impact factor of scientific journals compared with clinical journals.

17. The 2-year period for the impact factor is arbitrary and not based on any robust, published data, as far as the authors are aware.
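Limitation 1 above, the skewing effect of a single highly cited article, can be made concrete with a toy calculation. All numbers here are hypothetical.

```python
# Toy illustration: the impact factor is a mean, so one outlier article
# can dominate it. The citation counts below are invented for illustration.

def impact_factor(citations_per_item):
    # IF is the mean citation count over all citable items.
    return sum(citations_per_item) / len(citations_per_item)

# 99 articles cited once each, plus a single article cited 200 times:
citations = [1] * 99 + [200]

print(impact_factor(citations))                # 2.99 -> looks like a high-impact journal
print(sorted(citations)[len(citations) // 2])  # 1 -> but the typical article is cited once
```

The mean (2.99) suggests a "high-impact" journal, while the median (1) shows that the typical article in it is cited only once, which is exactly why the IF is a poor proxy for the impact of any individual paper.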

On the whole, the range of impact factors does reflect approximately the scientific standing of a journal. Thus, JBJS Am (current impact factor 3.2) is perceived to be a scientifically `better' journal than, say, JBJS Br (impact factor 2.78). The inverted commas around concepts such as scientific standing and scientific quality emphasize that, although most people understand these concepts, they are difficult to define and even more difficult to quantify. While comparisons between journals can be made, over-interpretation leads to potentially inappropriate conclusions. On the other hand, the IF is at least objective and capable of only modest manipulation, and in most instances it follows intuition and experience. One reason impact factors have stayed in use is that they are the best currently available measure. Another is that attention to the quality of the journal further encourages researchers to concentrate their output on high quality rather than high quantity. Authors need to consider the message they wish to pass on to the reader and then decide on the best vehicle. Others have tried to overcome the shortfalls by, for example, assuming that the quality of specialty journals is similar and adjusting for factors that may affect the number of citations of an article (such as the number of journals and published articles in the field, or current scientific or clinical interest). However, these methodologies are themselves subject to limitations.

Conclusion

Owing to its simplicity and convenience, the journal impact factor is, and probably will remain in the near future, the most popular parameter of journal quality. Although serious criticisms have been raised, an alternative is yet to be accepted. By recognizing the strengths, weaknesses, and limitations of the IF, we are better prepared for a critical analysis of scientific journal developments.

References:

1. Garfield E. Citation analysis as a tool in journal evaluation. Science 1972;178:471-9.

2. http://en.wikipedia.org/wiki/Impact_factor

3. Hecht F, Hecht BK, Sandberg AA. The journal "impact factor": a misnamed, misleading, misused measure. Cancer Genet Cytogenet. 1998 Jul 15;104(2):77-81.

4. Bornmann L, Marx W, Gasparyan AY, Kitas GD. Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatol Int. 2012 Jul;32(7):1861-7

5. Hansson S. Impact factor as a misleading tool in evaluation of medical journals. Lancet. 1995 Sep 30;346(8979):906.

6. Neuberger J, Counsell C. Impact factors: uses and abuses. Eur J Gastroenterol Hepatol. 2002 Mar;14(3):209-11.

7. Grzybowski A. Impact factor--strengths and weaknesses. Clin Dermatol. 2010 Jul-Aug;28(4):455-7.




Disclaimer: This is not an original article or a review; sections from various articles (as referenced) have been compiled to prepare an overview. All credit belongs to the original authors.

Compiled by Dr Ashok Shyam