Normalization at the field level: fractional counting of citations
Van Raan et al. (2010; arXiv:1003.2113) have proposed a new indicator (MNCS)
for field normalization. Since field normalization is also used in the Leiden
Rankings of universities, we elaborate our critique of journal normalization in
Opthof & Leydesdorff (2010; arXiv:1002.2769) in this rejoinder concerning field
normalization. Fractional citation counting thoroughly solves the issue of
normalization for differences in citation behavior among fields. This indicator
can also be used to obtain a normalized impact factor.
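The core of fractional counting can be sketched in a few lines: each citation is weighted by the reciprocal of the citing paper's reference-list length, so citations coming from reference-heavy fields count for less. The data structure and numbers below are invented for illustration, not taken from the paper.

```python
# Fractional citation counting, a minimal sketch (data invented):
# each citation is weighted by 1 / (number of references in the citing
# paper), normalizing for field-specific citation behavior.

def fractional_citations(citing_papers, target):
    """Sum the fractional weight of every citation to `target`.
    `citing_papers` is a list of dicts with a 'references' list
    (a hypothetical structure, chosen for illustration)."""
    total = 0.0
    for paper in citing_papers:
        refs = paper["references"]
        if target in refs:
            total += 1.0 / len(refs)  # one citation, fractionally counted
    return total

papers = [
    {"references": ["A", "B"]},            # cites A with weight 1/2
    {"references": ["A", "C", "D", "E"]},  # cites A with weight 1/4
    {"references": ["B", "C"]},            # does not cite A
]
print(fractional_citations(papers, "A"))  # 0.5 + 0.25 = 0.75
```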
Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations
Impact factors (and similar measures such as the Scimago Journal Rankings)
suffer from two problems: (i) citation behavior varies among fields of science
and therefore leads to systematic differences, and (ii) there are no statistics
to inform us whether differences are significant. The recently introduced SNIP
indicator of Scopus tries to remedy the first of these two problems, but a
number of normalization decisions are involved which makes it impossible to
test for significance. Using fractional counting of citations, based on the
assumption that impact is proportionate to the number of references in the
citing documents, citations can be contextualized at the paper level and
aggregated impacts of sets can be tested for their significance. It can be
shown that the weighted impact of Annals of Mathematics (0.247) is not so much
lower than that of Molecular Cell (0.386), despite a five-fold difference
between their impact factors (2.793 and 13.156, respectively).
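As a rough illustration of why fractional counting narrows the gap between fields, the sketch below computes a fractionally counted journal impact under invented numbers: a journal cited by papers with short reference lists can score higher than one cited more often by reference-heavy papers. All values are hypothetical, not the ones from the abstract.

```python
# Hypothetical comparison of two journals under fractional counting:
# every citation contributes 1 / (length of the citing paper's reference
# list), and the sum is divided by the journal's number of papers.

def weighted_impact(citing_ref_counts, n_papers):
    """Fractionally counted impact: sum of 1/refs over all citations
    to the journal, divided by its number of citable papers."""
    return sum(1.0 / r for r in citing_ref_counts) / n_papers

# Reference-list lengths of the papers citing each (invented) journal:
math_cites = [10, 12, 8]              # few citations, short reference lists
bio_cites = [45, 50, 60, 40, 55, 48]  # more citations, long reference lists

print(round(weighted_impact(math_cites, 3), 3))  # higher despite fewer citations
print(round(weighted_impact(bio_cites, 3), 3))
```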
A Rejoinder on Energy versus Impact Indicators
Citation distributions are so skewed that using the mean or any other central
tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy,
Exergy, and Entropy or EEE), the Integrated Impact Indicator (I3) is based on
non-parametric statistics using the (100) percentiles of the distribution.
Observed values can be tested against expected ones; impact can be qualified at
the article level and then aggregated. (Scientometrics, in press.)
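A minimal sketch of a percentile-based scalar in the spirit of I3, assuming a simple mid-rank percentile convention; the reference set and citation counts below are invented:

```python
# Percentile-based scalar in the spirit of I3 (all numbers invented):
# each paper gets the percentile rank of its citation count within a
# reference set, and the ranks are summed over the unit's papers.

def percentile_rank(value, reference):
    """Percentage of reference values below `value`, with ties
    counted at half weight (a common mid-rank convention)."""
    below = sum(1 for v in reference if v < value)
    ties = sum(1 for v in reference if v == value)
    return 100.0 * (below + 0.5 * ties) / len(reference)

def integrated_impact(citation_counts, reference):
    """Sum of percentile ranks over a set of papers (an I3-style scalar)."""
    return sum(percentile_rank(c, reference) for c in citation_counts)

reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]  # field citation counts
unit_papers = [2, 13, 34]                          # one unit's papers
print(integrated_impact(unit_papers, reference_set))  # 35.0 + 75.0 + 95.0 = 205.0
```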
The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals
In our chapter we address the statistical analysis of percentiles: How should
the citation impact of institutions be compared? In educational and
psychological testing, percentiles are already used widely as a standard to
evaluate an individual's test scores - intelligence tests for example - by
comparing them with the percentiles of a calibrated sample. Percentiles, or
percentile rank classes, are also a very suitable method for bibliometrics to
normalize citations of publications in terms of the subject category and the
publication year and, unlike the mean-based indicators (the relative citation
rates), percentiles are scarcely affected by skewed distributions of citations.
The percentile of a certain publication provides information about the citation
impact this publication has achieved in comparison to other similar
publications in the same subject category and publication year. Analyses of
percentiles, however, have not always been presented in the most effective and
meaningful way. New APA guidelines (American Psychological Association, 2010)
suggest a lesser emphasis on significance tests and a greater emphasis on the
substantive and practical significance of findings. Drawing on work by Cumming
(2012) we show how examinations of effect sizes (e.g. Cohen's d statistic) and
confidence intervals can lead to a clear understanding of citation impact
differences.
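The effect-size-plus-interval reporting the chapter recommends can be illustrated as follows; the formulas are the standard pooled-SD Cohen's d and a normal-approximation confidence interval, and the percentile data are invented:

```python
import math

# Effect size with an interval (data invented): pooled-SD Cohen's d
# plus a normal-approximation 95% CI for each institution's mean.

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def mean_ci(xs, z=1.96):
    """Approximate 95% confidence interval for the mean."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    half = z * s / math.sqrt(n)
    return m - half, m + half

inst_a = [55, 60, 72, 48, 66, 70, 58, 63]  # percentile ranks, institution A
inst_b = [42, 50, 47, 55, 39, 44, 52, 49]  # percentile ranks, institution B
print(round(cohens_d(inst_a, inst_b), 2))
print(mean_ci(inst_a))
```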
The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
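The mechanics of the analysis described, correlating assessor scores and then controlling for the journal, can be sketched with a plain Pearson correlation and a first-order partial correlation. All scores and impact factors below are invented, so the numbers only illustrate the procedure, not the paper's findings.

```python
import math

# Correlate two assessors' scores, then partial out the journal impact
# factor to "control for journal" and see how much correlation remains.

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

impact = [30, 25, 10, 5, 3, 2]  # journal impact factors (invented)
score1 = [9, 8, 6, 5, 4, 4]     # assessor 1's scores
score2 = [8, 9, 5, 6, 4, 3]     # assessor 2's scores

print(round(pearson(score1, score2), 2))               # strong raw correlation
print(round(partial_corr(score1, score2, impact), 2))  # far weaker once IF is controlled
```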
Spectrophotometric, chemometric and chromatographic determination of naphazoline hydrochloride and chlorpheniramine maleate in the presence of naphazoline hydrochloride alkaline degradation product
Four accurate and sensitive methods were developed and validated for the determination of naphazoline hydrochloride (NAP) and chlorpheniramine maleate (CLO) in the presence of the naphazoline hydrochloride alkaline degradation product (NAP Deg). The first method is a spectrophotometric one, where NAP was determined by the fourth-derivative (D4) spectrophotometric method by measuring the peak amplitude at 302 nm, while CLO was determined by the second derivative of the ratio spectra (DD2) spectrophotometric method at 276.4 nm. The second method is a chemometric-assisted spectrophotometric method in which partial least squares (PLS-1) and principal component regression (PCR) were used for the determination of NAP, CLO and NAP Deg using the information contained in the absorption spectra of their ternary mixture. The third method is a TLC-densitometric one, where NAP, CLO and NAP Deg were separated on HPTLC silica gel F254 plates using ethyl acetate:methanol:ammonia (8:2:0.5, by volume) as the developing system, followed by densitometric measurement at 245 nm. The fourth method is an HPLC method, where NAP, CLO and NAP Deg were separated using an ODS C18 column and a mobile phase consisting of 0.1 M KH2PO4 (pH 7):methanol (55:45, v/v) delivered at 1.5 mL min−1, followed by UV detection at 265 nm. The proposed methods have been successfully applied to the analysis of NAP and CLO in pharmaceutical formulations without interference from the dosage-form additives, and the results were statistically compared with a reported method.
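The multivariate calibration methods mentioned (PLS-1, PCR) rest on the linear additivity of Beer's law: each component contributes absorbance in proportion to its concentration. The simplest case, two components resolved from absorbances at two wavelengths, can be sketched as below; the absorptivity and absorbance values are invented, not values from the paper.

```python
# Two-component spectrophotometric resolution under Beer's law additivity,
# the linear model underlying PLS/PCR calibration (all values invented).

def resolve_two_components(a11, a12, a21, a22, abs1, abs2):
    """Solve [[a11, a12], [a21, a22]] @ [c1, c2] = [abs1, abs2] by
    Cramer's rule; a_ij = absorptivity of component j at wavelength i."""
    det = a11 * a22 - a12 * a21
    c1 = (abs1 * a22 - a12 * abs2) / det
    c2 = (a11 * abs2 - abs1 * a21) / det
    return c1, c2

# Hypothetical absorptivities (per microgram/mL) at two wavelengths,
# and the measured mixture absorbances at those wavelengths:
c1, c2 = resolve_two_components(0.05, 0.01, 0.02, 0.04, 0.60, 0.80)
print(round(c1, 2), round(c2, 2))  # recovered concentrations of the two components
```

With a full spectrum rather than two wavelengths, the same linear system becomes overdetermined, which is where least-squares methods such as PLS and PCR take over.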
