
    The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations

    The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores, and between assessor score and the number of citations, is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
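The bias control described in this abstract can be sketched in a few lines: correlate two assessors' scores before and after removing journal-level means. All data below are hypothetical, and the demeaning step is only an illustration of the idea, not the authors' exact statistical model.

```python
# Correlate assessor scores with and without removing journal-level means.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def demean_by_journal(values, journals):
    means = {j: mean(v for v, jj in zip(values, journals) if jj == j)
             for j in set(journals)}
    return [v - means[j] for v, j in zip(values, journals)]

# Both assessors rate papers in the high-impact journal "A" higher.
journals = ["A", "A", "A", "B", "B", "B"]
score1 = [9, 8, 9, 4, 5, 3]
score2 = [8, 9, 7, 5, 4, 5]

raw_r = pearson(score1, score2)
adj_r = pearson(demean_by_journal(score1, journals),
                demean_by_journal(score2, journals))
print(raw_r > adj_r)  # the shared journal effect inflates the raw correlation
```

Once the journal-level means are subtracted, only within-journal agreement remains, which is the quantity the abstract reports as weak.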

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. Comment: Scientometrics, in press
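A minimal sketch of the percentile idea behind I3, with a hypothetical weighting scheme (each paper simply scores its percentile rank, 0-100, and the set aggregates by summation; the actual I3 uses a configurable percentile-class weighting):

```python
# Each paper scores its percentile rank in the citation distribution
# (share of papers with strictly fewer citations, x 100); the set-level
# score is the sum of the article-level percentiles.
def percentile_ranks(citations):
    n = len(citations)
    return [100.0 * sum(1 for x in citations if x < c) / n for c in citations]

def i3_like(citations):
    return sum(percentile_ranks(citations))

cites = [0, 1, 1, 2, 5, 40]  # skewed, as citation data typically are
print(percentile_ranks(cites))
print(i3_like(cites))
```

Because every paper contributes its rank rather than its raw count, the single 40-citation outlier cannot dominate the aggregate the way it would dominate a mean.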

    The information sources and journals consulted or read by UK paediatricians to inform their clinical practice and those which they consider important: a questionnaire survey

    Background: Implementation of health research findings is important for medicine to be evidence-based. Previous studies have found variation in the information sources thought to be of greatest importance to clinicians, but publication in peer-reviewed journals is the traditional route for dissemination of research findings. There is debate about whether the impact made on clinicians should be considered as part of the evaluation of research outputs. We aimed to determine first which information sources are generally most consulted by paediatricians to inform their clinical practice, and which sources they considered most important, and second, how many and which peer-reviewed journals they read. Methods: We enquired, by questionnaire survey, about the information sources and academic journals that UK medical paediatric specialists generally consulted, attended or read and considered important to their clinical practice. Results: The same three information sources – professional meetings & conferences, peer-reviewed journals and medical colleagues – were, overall, the most consulted or attended and ranked the most important. No one information source was found to be of greatest importance to all groups of paediatricians. Journals were widely read by all groups, but the proportion ranking them first in importance as an information source ranged from 10% to 46%. The number of journals read varied between the groups, but Archives of Disease in Childhood and BMJ were the most read journals in all groups. Six out of the seven journals previously identified as containing best paediatric evidence are the most widely read overall by UK paediatricians; however, only the two most prominent are widely read by those based in the community. Conclusion: No one information source is dominant; therefore, a variety of approaches to Continuing Professional Development and the dissemination of research findings to paediatricians should be used. Journals are an important information source. A small number of key ones can be identified, and such analysis could provide valuable additional input into the evaluation of clinical research outputs.

    Marketing data: Has the rise of impact factor led to the fall of objective language in the scientific article?

    The language of science should be objective and detached and should place data in the appropriate context. The aim of this commentary was to explore the notion that recent trends in the use of language have led to a loss of objectivity in the presentation of scientific data. The relationship between value-laden vocabulary and impact factor among fundamental biomedical research and clinical journals has been explored. It appears that fundamental research journals with high impact factors have experienced a rise in value-laden terms in the past 25 years.

    A reverse engineering approach to the suppression of citation biases reveals universal properties of citation distributions

    The large amount of information contained in bibliographic databases has recently boosted the use of citations, and other indicators based on citation numbers, as tools for the quantitative assessment of scientific research. Citation counts are often interpreted as proxies for the scientific influence of papers, journals, scholars, and institutions. However, a rigorous and scientifically grounded methodology for a correct use of citation counts is still missing. In particular, cross-disciplinary comparisons in terms of raw citation counts systematically favor scientific disciplines with higher citation and publication rates. Here we perform an exhaustive study of the citation patterns of millions of papers, and derive a simple transformation of citation counts able to suppress the disproportionate citation counts among scientific domains. We find that the transformation is well described by a power-law function, and that the parameter values of the transformation are typical features of each scientific discipline. Universal properties of citation patterns descend therefore from the fact that citation distributions for papers in a specific field are all part of the same family of univariate distributions. Comment: 9 pages, 6 figures. Supporting information files available at http://filrad.homelinux.or
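A power-law rescaling of the kind the abstract describes can be sketched as follows. The functional form c -> (c / A_f) ** (1 / B_f) and all parameter values here are hypothetical placeholders, not the fitted values from the paper:

```python
# Rescale raw citation counts with a per-field power law so that papers
# from fields with very different citation rates become comparable.
def rescale(citations, a, b):
    # a: hypothetical field scale parameter; b: hypothetical field exponent
    return [(c / a) ** (1.0 / b) for c in citations]

biology = rescale([40, 80, 120], a=40.0, b=1.2)  # high-citation field
maths = rescale([4, 8, 12], a=4.0, b=1.1)        # low-citation field

# Papers at the same relative standing in their field map to similar scores.
print(biology[0], maths[0])  # both 1.0
```

After the transformation, a modestly cited mathematics paper and a heavily cited biology paper at the same within-field standing receive comparable scores, which is the point of the suppression.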

    What Makes a Great Journal Great in the Sciences? Which Came First, the Chicken or the Egg?

    The paper is concerned with analysing what makes a great journal great in the sciences, based on quantifiable Research Assessment Measures (RAM). Alternative RAM are discussed, with an emphasis on the Thomson Reuters ISI Web of Science database (hereafter ISI). Various ISI RAM that are calculated annually or updated daily are defined and analysed, including the classic 2-year impact factor (2YIF), 5-year impact factor (5YIF), Immediacy (or zero-year impact factor (0YIF)), Eigenfactor, Article Influence, C3PO (Citation Performance Per Paper Online), h-index, Zinfluence, PI-BETA (Papers Ignored - By Even The Authors), Impact Factor Inflation (IFI), and three new RAM, namely Historical Self-citation Threshold Approval Rating (H-STAR), 2 Year Self-citation Threshold Approval Rating (2Y-STAR), and Cited Article Influence (CAI). The RAM data are analysed for the 6 most highly cited journals in 20 highly varied and well-known ISI categories in the sciences, where the journals are chosen on the basis of 2YIF. The application to these 20 ISI categories could be used as a template for other ISI categories in the sciences and social sciences, and as a benchmark for newer journals in a range of ISI disciplines. In addition to evaluating the 6 most highly cited journals in each of 20 ISI categories, the paper also highlights the similarities and differences in alternative RAM, finds that several RAM capture similar performance characteristics for the most highly cited scientific journals, and determines that PI-BETA is not highly correlated with the other RAM, and hence conveys additional information regarding research performance. In order to provide a meta-analysis summary of the RAM, which are predominantly ratios, harmonic mean rankings are presented of the 13 RAM for the 6 most highly cited journals in each of the 20 ISI categories. It is shown that emphasizing the impact factor, specifically the 2-year impact factor, of a journal to the exclusion of other informative RAM can lead to a distorted evaluation of journal performance and influence on different disciplines, especially in view of inflated journal self-citations.
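The harmonic-mean aggregation step is easy to illustrate. The 13 RAM come from the paper, but the ranks below are made-up numbers for one hypothetical journal:

```python
# Combine one journal's ranks under several RAM into a single harmonic
# mean rank. The harmonic mean rewards strong (low) ranks more heavily
# than an arithmetic mean would.
def harmonic_mean(ranks):
    return len(ranks) / sum(1.0 / r for r in ranks)

ranks = [1, 2, 5, 1, 3]  # e.g. 1st on 2YIF, 5th on PI-BETA, and so on
hm = harmonic_mean(ranks)
arith = sum(ranks) / len(ranks)
print(hm)
print(hm < arith)  # the harmonic mean sits below the arithmetic mean here
```

Because the RAM are predominantly ratios, the harmonic mean is the natural averaging choice, and it keeps one poor rank (the 5th place above) from swamping several strong ones.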

    Bibliometrics of systematic reviews: analysis of citation rates and journal impact factors

    Background: Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF) alone and combined with several other characteristics. Methods: We conducted a bibliometric analysis of 1,261 systematic reviews published in 2008 and the citations to them in the Scopus database from 2008 to June 2012. Potential predictors of the citation impact of the reviews were examined using descriptive, univariate and multiple regression analysis. Results: The mean number of citations per review over four years was 26.5 (SD +/-29.9), or 6.6 citations per review per year. The mean JIF of the journals in which the reviews were published was 4.3 (SD +/-4.2). We found that 17% of the reviews accounted for 50% of the total citations and 1.6% of the reviews were not cited. The number of authors was correlated with the number of citations (r = 0.215). Some reviews published in the highest JIF quartile (>=5.16) received citations in the bottom quartile (eight or fewer), whereas 9% of reviews published in the lowest JIF quartile (<=2.06) received citations in the top quartile (34 or more). Six percent of reviews in journals with no JIF were also in the first quartile of citations. Conclusions: The JIF predicted over half of the variation in citations to the systematic reviews. However, the distribution of citations was markedly skewed. Some reviews in journals with low JIFs were well-cited and others in higher JIF journals received relatively few citations; hence the JIF did not accurately represent the number of citations to individual systematic reviews.
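The claim that the JIF "predicted over half of the variation in citations" is a statement about R-squared from a regression. A minimal sketch with hypothetical data:

```python
# R^2 from a simple least-squares fit of citation counts on JIF
# (all numbers below are hypothetical, for illustration only).
from statistics import mean

def r_squared(x, y):
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

jif_values = [1.2, 2.1, 3.5, 4.8, 9.0]
citations = [4, 10, 18, 30, 70]
r2 = r_squared(jif_values, citations)
print(r2 > 0.5)  # "over half of the variation" corresponds to R^2 > 0.5
```

A high R-squared across the whole sample is entirely compatible with the skew the authors describe: individual reviews can still land far from the fitted line, which is why the JIF misrepresents citations to individual reviews.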

    Metric-based vs peer-reviewed evaluation of a research output: Lesson learnt from UK’s national research assessment exercise

    Purpose: There is a general question regarding the monetary value of a research output, as a substantial amount of funding in modern academia is essentially awarded to good research presented in the form of journal articles, conference papers, performances, compositions, exhibitions, books and book chapters, which eventually leads to a further question: whether that value varies across disciplines. Answers to these questions will not only assist academics and researchers, but will also help higher education institutions (HEIs) make informed decisions in their administrative and research policies. Design and methodology: To examine both questions, we used the United Kingdom's recently concluded national research assessment exercise, the Research Excellence Framework (REF) 2014, as a case study. All the data for this study are sourced from openly available publications arising from the digital repositories of REF's results and HEFCE's funding allocations. Findings: A world-leading output earns between £7,504 and £14,639 per year within the REF cycle, whereas an internationally excellent output earns between £1,876 and £3,659, varying according to the area of research. Secondly, an investigation into the impact rating of 25,315 journal articles submitted in five areas of research by UK HEIs, and the funding awarded, revealed a linear relationship between the percentage of quartile-one journal publications and the percentage of 4* outputs in the Clinical Medicine, Physics, and Psychology/Psychiatry/Neuroscience UoAs; no relationship was found in the Classics and Anthropology/Development Studies UoAs, because most publications in the latter two disciplines are not journal articles. Practical implications: The findings provide an indication of the monetary value of a research output from the perspective of government funding for research, and also of what makes a good output, i.e. whether a relationship exists between good-quality output and the source of its publication. The findings may also influence future REF submission strategies in HEIs and show that the impact rating of journals is not necessarily a reflection of the quality of research in every discipline, which may have a significant influence on the future of scholarly communications in general. Originality: To the author's knowledge, this is the first time an investigation has estimated the monetary value of a good research output.
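The roughly 4:1 ratio between the 4* and 3* figures quoted above follows from weighting each quality grade when a funding pot is shared out. A back-of-envelope sketch; the pot size, output counts, and the 4:1 weights are hypothetical illustrations, not the paper's actual inputs:

```python
# Share a funding pot across outputs using quality weights (here 4 for a
# 4* output, 1 for a 3* output, 0 otherwise), then read off the annual
# value of a single output at each grade.
def value_per_output(pot, n_4star, n_3star, w4=4.0, w3=1.0):
    unit = pot / (n_4star * w4 + n_3star * w3)
    return unit * w4, unit * w3  # (per 4* output, per 3* output)

v4, v3 = value_per_output(pot=1_000_000, n_4star=100, n_3star=200)
print(v4 / v3)  # a world-leading output earns 4x an internationally excellent one
```

Note that £7,504 / £1,876 in the abstract is exactly 4.0, consistent with a fixed grade-weight ratio; the absolute values then vary by discipline because the per-discipline pots and output counts differ.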

    Inflated Impact Factors? The True Impact of Evolutionary Papers in Non-Evolutionary Journals

    Amongst the numerous problems associated with the use of impact factors as a measure of quality are the systematic differences in impact factors that exist among scientific fields. While in theory this can be circumvented by limiting comparisons to journals within the same field, for a diverse and multidisciplinary field like evolutionary biology, in which the majority of papers are published in journals that publish both evolutionary and non-evolutionary papers, this is impossible. However, a journal's overall impact factor may well be a poor predictor for the impact of its evolutionary papers. The extremely high impact factors of some multidisciplinary journals, for example, are believed by many to be driven mostly by publications from other fields. Despite plenty of speculation, however, we know as yet very little about the true impact of evolutionary papers in journals not specifically classified as evolutionary. Here I present, for a wide range of journals, an analysis of the number of evolutionary papers they publish and their average impact. I show that there are large differences in impact among evolutionary and non-evolutionary papers within journals; while the impact of evolutionary papers published in multidisciplinary journals is substantially overestimated by their overall impact factor, the impact of evolutionary papers in many of the more specialized, non-evolutionary journals is significantly underestimated. This suggests that, for evolutionary biologists, publishing in high-impact multidisciplinary journals should not receive as much weight as it does now, while evolutionary papers in more narrowly defined journals are currently undervalued. Importantly, however, their ranking remains largely unaffected. While journal impact factors may thus indeed provide a meaningful qualitative measure of impact, a fair quantitative comparison requires a more sophisticated journal classification system, together with multiple field-specific impact statistics per journal.

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions. Comment: 25 pages, 12 figures, 6 tables
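The missing-confidence-interval criticism is easy to illustrate: the impact factor is a mean of a highly skewed distribution, and even a simple normal-approximation interval around that mean makes the imprecision visible. The citation counts below are hypothetical:

```python
# Two-year impact factor (mean citations per citable item) with a simple
# normal-approximation 95% confidence interval on that mean.
from statistics import mean, stdev

def jif_with_ci(cites_per_item, z=1.96):
    n = len(cites_per_item)
    m = mean(cites_per_item)
    se = stdev(cites_per_item) / n ** 0.5  # standard error of the mean
    return m, (m - z * se, m + z * se)

# citations in year Y to items published in years Y-1 and Y-2
cites = [0, 0, 1, 1, 2, 3, 5, 12]
jif, (low, high) = jif_with_ci(cites)
print(jif, (low, high))  # the interval is wide relative to the point value
```

With an interval this wide, reporting the point estimate to three decimal places clearly overstates the precision, which is exactly the display problem the review identifies.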