417 research outputs found

    Impact factors of dermatological journals for 1991 – 2000

    BACKGROUND: The impact factors of scientific journals are interesting but not unproblematic. It is speculated that the number of journals in which citations can be made correlates with the impact factors in any given speciality. METHODS: Using the Journal Citation Report (JCR) for 1997, a bibliometric analysis was made to assess the correlation between the number of journals available in different fields of clinical medicine and the top impact factor. A detailed study was made of the dermatological journals listed in the JCR 1991–2000 to assess the relevance of this general survey. RESULTS: Using the 1997 JCR definitions of speciality journals, a significant linear correlation was found between the number of journals in a given field and the top impact factor of that field (rs = 0.612, p < 0.05). Studying the trend for dermatological journals from 1991 to 2000, a similar pattern was found. Significant correlations were also found between the total number of journals and the mean impact factor (rs = 0.793, p = 0.006), between the total number of journals and the top impact factor (rs = 0.759, p = 0.011), and between the mean and the top impact factor (rs = 0.827, p = 0.003). CONCLUSIONS: The observations suggest that the number of journals available predicts the top impact factor; for dermatology journals, both the top and the mean impact factor are predicted. This is in good agreement with theoretical expectations, as more journals make more print space available for more papers containing citations. It is suggested that new journals in dermatology should be encouraged, as this will most likely increase the impact factors of dermatological journals generally.

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.
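For context, the h-index that the success-index is benchmarked against, and a simplified "successful papers" count, can be sketched as follows. The fixed citation threshold used here is a placeholder assumption; the actual NSP/success-index literature defines the comparison term per paper and per field, which this sketch does not attempt.

```python
def h_index(citations):
    """h = largest h such that h papers each have at least h citations."""
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def success_count(citations, threshold):
    """Count papers cited at least `threshold` times.
    The single global threshold is a simplifying assumption for
    illustration; the published indices use paper-specific terms."""
    return sum(c >= threshold for c in citations)

profile = [10, 8, 5, 4, 3]       # invented citation counts
h = h_index(profile)             # h-index of the profile
successes = success_count(profile, 5)  # papers with >= 5 citations
```

Note how the success count can exceed or fall below h depending on the threshold, which is one way its scale of measurement is richer than the h-index's.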

    ResearchGate versus Google Scholar: Which finds more early citations?

    ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to other citation indexes, using 2675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data than Google Scholar. Nevertheless, preprint sharing in ResearchGate is substantial enough for authors to take seriously.

    The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals

    In our chapter we address the statistical analysis of percentiles: how should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already used widely as a standard to evaluate an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method for bibliometrics to normalize citations of publications in terms of the subject category and the publication year, and, unlike the mean-based indicators (the relative citation rates), percentiles are scarcely affected by skewed distributions of citations. The percentile of a certain publication provides information about the citation impact this publication has achieved in comparison to other similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
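A minimal sketch of the two ingredients discussed above: a percentile rank of one publication against a reference set of same-field, same-year publications, and Cohen's d as an effect size between two institutions' scores. Conventions vary (e.g. how ties and the percentile boundary are handled); this uses one common choice and invented data.

```python
def percentile_rank(value, reference):
    """Percentage of reference values strictly below `value`.
    One common convention; other definitions handle ties differently."""
    below = sum(v < value for v in reference)
    return 100.0 * below / len(reference)

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / sp

# Invented reference set: citation counts of comparable publications
field_year_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
p = percentile_rank(5, field_year_set)  # rank of a paper with 5 citations

# Invented percentile scores for publications of two institutions
inst_a = [60.0, 72.0, 55.0, 80.0]
inst_b = [50.0, 48.0, 65.0, 58.0]
d = cohens_d(inst_a, inst_b)
```

Reporting d with a confidence interval, rather than only a p-value, is the shift in emphasis the chapter advocates.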

    The role of mentorship in protege performance

    The role of mentorship in protege performance is a matter of importance to academic, business, and governmental organizations. While the benefits of mentorship for proteges, mentors, and their organizations are apparent, the extent to which proteges mimic their mentors' career choices and acquire their mentorship skills is unclear. Here, we investigate one aspect of mentor emulation by studying mentorship fecundity (the number of proteges a mentor trains) with data from the Mathematics Genealogy Project, which tracks the mentorship record of thousands of mathematicians over several centuries. We demonstrate that fecundity among academic mathematicians is correlated with other measures of academic success. We also find that the average fecundity of mentors remains stable over 60 years of recorded mentorship. We further uncover three significant correlations in mentorship fecundity. First, mentors with small mentorship fecundity train proteges that go on to have a 37% larger than expected mentorship fecundity. Second, in the first third of their career, mentors with large fecundity train proteges that go on to have a 29% larger than expected fecundity. Finally, in the last third of their career, mentors with large fecundity train proteges that go on to have a 31% smaller than expected fecundity.

    Reviewing, indicating, and counting books for modern research evaluation systems

    In this chapter, we focus on the specialists who have helped to improve the conditions for book assessments in research evaluation exercises, with empirically based data and insights supporting their greater integration. Our review highlights the research carried out by four types of expert communities, referred to as the monitors, the subject classifiers, the indexers, and the indicator constructionists. Many challenges lie ahead for scholars affiliated with these communities, particularly the latter three. By acknowledging their unique yet interrelated roles, we show where the greatest potential lies for both quantitative and qualitative indicator advancements in book-inclusive evaluation systems. (Forthcoming in Glanzel, W., Moed, H.F., Schmoch, U., & Thelwall, M. (Eds.), Springer Handbook of Science and Technology Indicators, Springer, 2018.)

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central-tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. (Scientometrics, in press.)

    Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative

    Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this ranking suffer from statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees are extensive but lack effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, is resistant to manipulation, and is a good indicator of impact; it could assist the ERA evaluation committees and similar evaluations internationally.

    Bibliometrics of systematic reviews: analysis of citation rates and journal impact factors

    Background: Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF) alone and combined with several other characteristics. Methods: We conducted a bibliometric analysis of 1,261 systematic reviews published in 2008 and the citations to them in the Scopus database from 2008 to June 2012. Potential predictors of the citation impact of the reviews were examined using descriptive, univariate and multiple regression analyses. Results: The mean number of citations per review over four years was 26.5 (SD ±29.9), or 6.6 citations per review per year. The mean JIF of the journals in which the reviews were published was 4.3 (SD ±4.2). We found that 17% of the reviews accounted for 50% of the total citations, and 1.6% of the reviews were not cited. The number of authors was correlated with the number of citations (r = 0.215, P < 0.001). Some reviews published in the highest JIF quartile (>5.16) nevertheless received citations in the bottom quartile (eight or fewer), whereas 9% of reviews published in the lowest JIF quartile (≤2.06) received citations in the top quartile (34 or more). Six percent of reviews in journals with no JIF were also in the first quartile of citations. Conclusions: The JIF predicted over half of the variation in citations to the systematic reviews. However, the distribution of citations was markedly skewed. Some reviews in journals with low JIFs were well cited, and others in higher-JIF journals received relatively few citations; hence the JIF did not accurately represent the number of citations to individual systematic reviews.
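The share of citation variance explained by a single predictor such as the JIF corresponds to the R² of a simple linear regression. A self-contained sketch with invented data (not the study's 1,261 reviews):

```python
def r_squared(x, y):
    """R-squared of an ordinary least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx          # slope
    b0 = my - b1 * mx       # intercept
    ss_res = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Invented example: JIF of publishing journal vs. citations received
jif = [1.5, 2.0, 3.8, 4.3, 6.1, 9.0]
cites = [4, 11, 18, 40, 35, 70]
r2 = r_squared(jif, cites)  # fraction of citation variance "explained"
```

An r2 around 0.5 would correspond to the study's finding that the JIF alone predicted over half of the variation, while the skew and quartile crossovers in the abstract show why R² says little about any individual review.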

    Metric-based vs peer-reviewed evaluation of a research output: Lesson learnt from UK’s national research assessment exercise

    Purpose: There is a general question regarding the monetary value of a research output, as a substantial amount of funding in modern academia is essentially awarded for good research presented in the form of journal articles, conference papers, performances, compositions, exhibitions, books and book chapters, etc., which leads to a further question of whether that value varies across disciplines. Answers to these questions will not only assist academics and researchers, but will also help higher education institutions (HEIs) make informed decisions in their administrative and research policies. Design and methodology: To examine both questions, we used the United Kingdom's recently concluded national research assessment exercise, the Research Excellence Framework (REF) 2014, as a case study. All the data for this study are sourced from the openly available publications in the digital repositories of the REF results and HEFCE's funding allocations. Findings: A world-leading output earns between £7,504 and £14,639 per year within the REF cycle, whereas an internationally excellent output earns between £1,876 and £3,659, varying according to area of research. Secondly, an investigation into the impact rating of 25,315 journal articles submitted in five areas of research by UK HEIs, together with their awarded funding, revealed a linear relationship between the percentage of quartile-one journal publications and the percentage of 4* outputs in the Clinical Medicine, Physics and Psychology/Psychiatry/Neuroscience UoAs, and no relationship in the Classics and Anthropology/Development Studies UoAs, because most publications in the latter two disciplines are not journal articles. Practical implications: The findings provide an indication of the monetary value of a research output from the perspective of government funding for research, and also of what makes a good output, i.e. whether a relationship exists between good-quality output and the source of its publication. The findings may also influence future REF submission strategies in HEIs and show that the impact rating of journals is not necessarily a reflection of the quality of research in every discipline, which may have a significant influence on the future of scholarly communications in general. Originality: To the best of the authors' knowledge, this is the first investigation to estimate the monetary value of a good research output.