The success-index: an alternative approach to the h-index for evaluating an individual's research output
Among the most recent bibliometric indicators for normalizing differences in citation behaviour among fields of science, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. NSP deserves attention for its great simplicity and immediate meaning, equivalent to those of the h-index, but it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although it requires more computing effort. Next, we present a detailed analysis of the success-index's operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, and (4) scientific journals.
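The two counting schemes contrasted in this abstract can be sketched as follows. This is a minimal illustration, not the paper's definition: the per-paper comparison thresholds passed to `success_index` stand in for the field-normalized citation thresholds the success-index actually uses, and are assumed to be supplied by the caller.

```python
def h_index(citations):
    """h = largest h such that at least h papers have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def success_index(citations, thresholds):
    """Count papers whose citations exceed a paper-specific comparison
    threshold (field-normalized in the original proposal; here the
    thresholds are illustrative inputs chosen by the caller)."""
    return sum(c > t for c, t in zip(citations, thresholds))
```

Because each paper is judged against its own threshold, the success-index can take any integer value up to the number of papers, which is the richer scale of measurement the abstract refers to.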
Bias in the journal impact factor
The ISI journal impact factor (JIF) is based on a sample that may represent
half the whole-of-life citations to some journals, but a small fraction (<10%)
of the citations accruing to other journals. This disproportionate sampling
means that the JIF provides a misleading indication of the true impact of
journals, biased in favour of journals that have a rapid rather than a
prolonged impact. Many journals exhibit a consistent pattern of citation
accrual from year to year, so it may be possible to adjust the JIF to provide a
more reliable indication of a journal's impact.
Comment: 9 pages, 8 figures; one reference corrected
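The sampling bias described here follows directly from how the two-year JIF is computed. A minimal sketch, with illustrative data-structure shapes (not an official API):

```python
def journal_impact_factor(citations_in_year, citable_items, year):
    """Two-year JIF for `year`: citations received in `year` to items
    published in the two preceding years, divided by the number of
    citable items published in those years.

    citations_in_year[(citing_year, cited_pub_year)] -> citation count
    citable_items[pub_year] -> number of citable items
    """
    window = (year - 1, year - 2)
    cites = sum(citations_in_year.get((year, y), 0) for y in window)
    items = sum(citable_items.get(y, 0) for y in window)
    return cites / items if items else 0.0
```

Note that citations to older papers fall entirely outside the two-year window, which is why the JIF favours journals with rapid rather than prolonged impact.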
The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals
In our chapter we address the statistical analysis of percentiles: How should
the citation impact of institutions be compared? In educational and
psychological testing, percentiles are already used widely as a standard to
evaluate an individual's test scores - intelligence tests for example - by
comparing them with the percentiles of a calibrated sample. Percentiles, or
percentile rank classes, are also a very suitable method for bibliometrics to
normalize citations of publications in terms of the subject category and the
publication year and, unlike the mean-based indicators (the relative citation
rates), percentiles are scarcely affected by skewed distributions of citations.
The percentile of a certain publication provides information about the citation
impact this publication has achieved in comparison to other similar
publications in the same subject category and publication year. Analyses of
percentiles, however, have not always been presented in the most effective and
meaningful way. New APA guidelines (American Psychological Association, 2010)
suggest a lesser emphasis on significance tests and a greater emphasis on the
substantive and practical significance of findings. Drawing on work by Cumming
(2012) we show how examinations of effect sizes (e.g. Cohen's d statistic) and
confidence intervals can lead to a clear understanding of citation impact
differences.
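The effect-size and confidence-interval analyses the chapter recommends can be sketched as follows; this is a generic textbook implementation, not the chapter's own code, and the normal critical value 1.96 is an approximation that assumes reasonably large samples.

```python
import math

def cohens_d(a, b):
    """Standardized mean difference between two samples,
    using the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

def mean_ci95(sample):
    """Approximate 95% confidence interval for the mean (normal
    critical value; a t critical value is preferable for small n)."""
    n = len(sample)
    m = sum(sample) / n
    se = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1) / n)
    return m - 1.96 * se, m + 1.96 * se
```

Applied to two institutions' percentile distributions, a Cohen's d near 0.2 would indicate a small difference in citation impact, near 0.8 a large one, regardless of statistical significance.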
A Rejoinder on Energy versus Impact Indicators
Citation distributions are so skewed that using the mean or any other central
tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy,
Exergy, and Entropy or EEE), the Integrated Impact Indicator (I3) is based on
non-parametric statistics using the (100) percentiles of the distribution.
Observed values can be tested against expected ones; impact can be qualified at
the article level and then aggregated.
Comment: Scientometrics, in press
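A minimal sketch of aggregating percentile rank classes in the spirit of I3. The six-class weighting used here is one commonly cited partition; it is an assumption for illustration, and other class schemes are possible. Each paper is assumed to have already been assigned a percentile (0-100) within its reference set.

```python
def i3(percentiles,
       classes=((99, 6), (95, 5), (90, 4), (75, 3), (50, 2), (0, 1))):
    """I3-style aggregation: sum over percentile rank classes of
    (class weight x number of papers in the class). Unlike a mean
    citation rate, this non-parametric sum grows with both the size
    and the impact of the publication set."""
    total = 0
    for p in percentiles:
        for threshold, weight in classes:
            if p >= threshold:
                total += weight
                break
    return total
```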
Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative
Excellence in Research for Australia (ERA) is an attempt by the Australian
Research Council to rate Australian universities on a 5-point scale within 180
Fields of Research using metrics and peer evaluation by an evaluation
committee. Some of the bibliometric data contributing to this ranking suffer
statistical issues associated with skewed distributions. Other data are
standardised year-by-year, placing undue emphasis on the most recent
publications which may not yet have reliable citation patterns. The
bibliometric data offered to the evaluation committees is extensive, but lacks
effective syntheses such as the h-index and its variants. The indirect H2 index
is objective, can be computed automatically and efficiently, is resistant to
manipulation, and is a good indicator of impact to assist the ERA evaluation
committees and similar evaluations internationally.
Comment: 19 pages, 6 figures, 7 tables, appendices
Universality of Performance Indicators based on Citation and Reference Counts
We find evidence for the universality of two relative bibliometric indicators
of the quality of individual scientific publications taken from different data
sets. One of these is a new index that considers both citation and reference
counts. We demonstrate this universality for relatively well cited publications
from a single institute, grouped by year of publication and by faculty or by
department. We show similar behaviour in publications submitted to the arXiv
e-print archive, grouped by year of submission and by sub-archive. We also find
that for reasonably well cited papers this distribution is well fitted by a
lognormal with a variance of around 1.3 which is consistent with the results of
Radicchi, Fortunato, and Castellano (2008). Our work demonstrates that
comparisons can be made between publications from different disciplines and
publication dates, regardless of their citation count and without expensive
access to the whole world-wide citation graph. Further, it shows that averages
of the logarithm of such relative bibliometric indices deal with the issue of
long tails and avoid the need for statistics based on lengthy ranking
procedures.
Comment: 15 pages, 14 figures, 11 pages of supplementary material. Submitted to Scientometrics
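The universality claim can be checked numerically: rescale each paper's citations by its group's mean, take logs, and compare the variance across groups. The sketch below simulates citations from a lognormal with log-variance 1.3 (the value reported above) and recovers it; the simulation is illustrative, not the paper's data.

```python
import math
import random

def log_relative_citations(citations):
    """Rescale each paper's citation count by the group mean, then take
    the log; under the claimed universality the resulting values follow
    roughly the same lognormal curve across fields and years."""
    mean = sum(citations) / len(citations)
    return [math.log(c / mean) for c in citations if c > 0]

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Simulated citation counts whose logs have variance 1.3.
random.seed(0)
sim = [math.exp(random.gauss(0, math.sqrt(1.3))) for _ in range(20000)]
```

Since dividing by the group mean only shifts the log values by a constant, the variance of the log-relative indicator matches the log-variance of the underlying distribution, which is what makes cross-field comparison possible without the full citation graph.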
Reviewing, indicating, and counting books for modern research evaluation systems
In this chapter, we focus on the specialists who have helped to improve the
conditions for book assessments in research evaluation exercises, with
empirically based data and insights supporting their greater integration. Our
review highlights the research carried out by four types of expert communities,
referred to as the monitors, the subject classifiers, the indexers and the
indicator constructionists. Many challenges lie ahead for scholars affiliated
with these communities, particularly the latter three. By acknowledging their
unique, yet interrelated roles, we show where the greatest potential is for
both quantitative and qualitative indicator advancements in book-inclusive
evaluation systems.
Comment: Forthcoming in Glänzel, W., Moed, H.F., Schmoch, U., Thelwall, M. (2018). Springer Handbook of Science and Technology Indicators. Springer. Some corrections made in subsection 'Publisher prestige or quality'.
ResearchGate versus Google Scholar: Which finds more early citations?
ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to other citation indexes, using 2675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data than Google Scholar. Nevertheless, preprint sharing in ResearchGate is substantial enough for authors to take seriously.
Bibliometrics of systematic reviews: analysis of citation rates and journal impact factors
Background:
Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF) alone and combined with several other characteristics.
Methods:
We conducted a bibliometric analysis of 1,261 systematic reviews published in 2008 and the citations to them in the Scopus database from 2008 to June 2012. Potential predictors of the citation impact of the reviews were examined using descriptive, univariate and multiple regression analysis.
Results:
The mean number of citations per review over four years was 26.5 (SD +/-29.9), or 6.6 citations per review per year. The mean JIF of the journals in which the reviews were published was 4.3 (SD +/-4.2). We found that 17% of the reviews accounted for 50% of the total citations, and 1.6% of the reviews were not cited. The number of authors was correlated with the number of citations (r = 0.215). Some reviews published in the highest JIF quartile (>=5.16) received citations in the bottom quartile (eight or fewer), whereas 9% of reviews published in the lowest JIF quartile (<=2.06) received citations in the top quartile (34 or more). Six percent of reviews in journals with no JIF were also in the first quartile of citations.
Conclusions:
The JIF predicted over half of the variation in citations to the systematic reviews. However, the distribution of citations was markedly skewed. Some reviews in journals with low JIFs were well cited and others in higher JIF journals received relatively few citations; hence the JIF did not accurately represent the number of citations to individual systematic reviews.
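"Variance predicted" here is the coefficient of determination from regressing citation counts on the JIF. A minimal sketch of that quantity for simple linear regression, with made-up example data rather than the study's dataset:

```python
def r_squared(x, y):
    """Proportion of variance in y explained by a least-squares fit on
    x; for simple linear regression this equals the squared Pearson
    correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

An R-squared of 0.5 means half of the citation variance tracks the JIF, which is consistent with both the predictive value and the individual-level unreliability the conclusion describes.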
Prediction of Emerging Technologies Based on Analysis of the U.S. Patent Citation Network
The network of patents connected by citations is an evolving graph, which
provides a representation of the innovation process. A patent citing another
implies that the cited patent reflects a piece of previously existing knowledge
that the citing patent builds upon. A methodology presented here (i) identifies
actual clusters of patents: i.e. technological branches, and (ii) gives
predictions about the temporal changes of the structure of the clusters. A
predictor, called the citation vector, is defined for characterizing
technological development to show how a patent cited by other patents belongs
to various industrial fields. The clustering technique adopted is able to
detect the new emerging recombinations, and predicts emerging new technology
clusters. The predictive ability of our new method is illustrated on the
example of USPTO subcategory 11, Agriculture, Food, Textiles. A cluster of
patents is determined based on citation data up to 1991, which shows
significant overlap with class 442, formed at the beginning of 1997. These new
tools of predictive analytics could support policy decision making processes in
science and technology, and help formulate recommendations for action.
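A simplified stand-in for the citation-vector idea: for a given patent, tally the technology fields of the patents that cite it and normalize. The data shapes (an edge list of citing/cited pairs and a patent-to-field map) are illustrative assumptions, not the paper's data model.

```python
from collections import Counter

def citation_vector(patent, citations, field_of):
    """Distribution over technology fields of the patents citing
    `patent`, normalized to sum to 1. A patent cited from several
    fields signals a cross-field recombination of the kind the
    clustering method is designed to detect."""
    citers = [src for src, dst in citations if dst == patent]
    counts = Counter(field_of[p] for p in citers)
    total = sum(counts.values())
    return {f: c / total for f, c in counts.items()}
```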
