
    Abandon Statistical Significance

    We discuss problems the null hypothesis significance testing (NHST) paradigm poses for replication, and more broadly for the biomedical and social sciences, as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors. We then discuss our own proposal, which is to abandon statistical significance. We recommend dropping the NHST paradigm--and the p-value thresholds intrinsic to it--as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real-world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence. We have no desire to "ban" p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors. We also argue that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures. We offer recommendations for how our proposal can be implemented in the scientific publication process as well as in statistical decision making more broadly.
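
    As a concrete illustration of treating the p-value continuously rather than as a screening threshold, here is a minimal Python sketch; the data, the report fields, and their values are invented for illustration, and only the t-test call is a real scipy API.

```python
# Minimal sketch: report a p-value continuously, alongside other evidence,
# rather than thresholding it at 0.05. Data and context fields are invented
# for illustration; only scipy.stats.ttest_ind is a real API call.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=50)
treated = rng.normal(loc=0.4, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(treated, control)

# Instead of printing "significant: p < 0.05", present the p-value as one
# piece of evidence among several domain-specific considerations.
report = {
    "p_value": round(float(p_value), 4),  # continuous, not thresholded
    "effect_size": round(float(treated.mean() - control.mean()), 3),
    "prior_evidence": "two earlier studies with effects in the same direction",
    "mechanism_plausibility": "plausible",
    "data_quality": "randomized, modest sample size",
}
for key, value in report.items():
    print(f"{key}: {value}")
```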

    Polling bias and undecided voter allocations: US Presidential elections, 2004 - 2016

    Accounting for undecided and uncertain voters is a challenging issue for predicting election results from public opinion polls. Undecided voters typify the uncertainty of swing voters in polls but are often ignored or allocated to each candidate in a simple, deterministic manner. Historically this may have been adequate, because the number of undecided voters was small enough to assume that they did not affect the relative proportions of the decided voters. However, in the presence of high numbers of undecided voters, these static rules may in fact bias the election predictions of poll authors and meta-poll analysts. In this paper, we examine the effect of undecided voters in the 2016 US presidential election relative to the previous three presidential elections. We show that the number of undecided voters was relatively high over the campaign and on election day, and that the allocation of undecided voters in this election was not consistent with two-party proportional (or even) allocations. We find evidence that static allocation regimes are inadequate for election prediction models and that probabilistic allocations may be superior. We also estimate the bias attributable to polling agencies, often referred to as "house effects".
    Comment: 32 pages, 9 figures, 6 tables
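
    A toy comparison of the allocation rules contrasted above, with invented poll shares; the Beta-distribution choice for the probabilistic rule is an illustrative assumption, not the paper's model.

```python
# Toy comparison of undecided-voter allocation rules: even, two-party
# proportional, and a simple probabilistic allocation. Poll shares invented.
import numpy as np

dem, rep, undecided = 0.44, 0.41, 0.15  # hypothetical poll shares

# Even (static) allocation: split undecideds 50/50.
even = (dem + undecided / 2, rep + undecided / 2)

# Two-party proportional allocation: split in proportion to decided support.
total = dem + rep
prop = (dem + undecided * dem / total, rep + undecided * rep / total)

# Probabilistic allocation: draw the Democratic share of undecideds from a
# Beta distribution centered on the proportional split, propagating
# uncertainty into the forecast instead of using one deterministic rule.
rng = np.random.default_rng(1)
share = rng.beta(20 * dem / total, 20 * rep / total, size=10_000)
dem_draws = dem + undecided * share

print(f"even:         D={even[0]:.3f} R={even[1]:.3f}")
print(f"proportional: D={prop[0]:.3f} R={prop[1]:.3f}")
print(f"probabilistic D share: {dem_draws.mean():.3f} "
      f"(90% interval {np.quantile(dem_draws, 0.05):.3f}-"
      f"{np.quantile(dem_draws, 0.95):.3f})")
```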

    Efficient computation of the first passage time distribution of the generalized master equation by steady-state relaxation

    The generalized master equation or the equivalent continuous time random walk equations can be used to compute the macroscopic first passage time distribution (FPTD) of a complex stochastic system from short-term microscopic simulation data. The computation of the mean first passage time and additional low-order FPTD moments can be simplified by directly relating the FPTD moment generating function to the moments of the local FPTD matrix. This relationship can be physically interpreted in terms of steady-state relaxation, an extension of steady-state flow. Moreover, it is amenable to a statistical error analysis that can be used to significantly increase computational efficiency. The efficiency improvement can be extended to the FPTD itself by modelling it using a Gamma distribution or a rational-function approximation to its Laplace transform.
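
    As a sketch of the Gamma-distribution modelling mentioned in the last sentence, the following toy example matches a Gamma distribution to the first two moments of simulated first passage times; the biased random walk stands in for the microscopic simulation data and is not the paper's system.

```python
# Toy illustration of modelling a first passage time distribution (FPTD)
# with a Gamma distribution matched to its first two moments. The biased
# random walk is an invented stand-in for microscopic simulation data.
import numpy as np

rng = np.random.default_rng(2)

def first_passage_time(target=20, p_up=0.55):
    """Steps until a +/-1 random walk starting at 0 first reaches target."""
    position, steps = 0, 0
    while position < target:
        position += 1 if rng.random() < p_up else -1
        steps += 1
    return steps

samples = np.array([first_passage_time() for _ in range(2000)])
mean, var = samples.mean(), samples.var()

# Moment matching: a Gamma(k, theta) has mean k*theta and variance
# k*theta^2, so k = mean^2/var and theta = var/mean.
k, theta = mean**2 / var, var / mean
print(f"mean FPT = {mean:.1f}, variance = {var:.1f}")
print(f"fitted Gamma: shape k = {k:.2f}, scale theta = {theta:.2f}")
```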

    The hydrogenation of metals upon interaction with water

    Hydrogen evolution at 600 deg and 5 x 10^-7 to 10^-6 torr from AV000 Al samples, which were pickled in 10 percent NaOH, is discussed. An H evolution kinetic equation is derived for samples of equal volume and different surface areas (5 and 20 sq cm). The values of the H evolution coefficient K are in agreement with the considered mechanism of H diffusion through an oxide layer. The activation energy for the H evolution process, obtained from the K-temperature relation, was 13,000 +/- 2000 cal/g-atom.
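
    A hedged sketch of how an activation energy of this kind is typically extracted from a K-temperature relation via an Arrhenius fit; the temperatures and rate coefficients below are synthetic, and this is a generic illustration rather than the report's actual procedure.

```python
# Generic Arrhenius fit: ln K = ln A - Ea/(R*T), so a linear fit of ln K
# against 1/T has slope -Ea/R. All data points here are synthetic.
import numpy as np

R = 1.987  # gas constant in cal/(mol K), matching the cal/g-atom units above

# Synthetic measurements generated with Ea = 13,000 cal/g-atom plus noise.
Ea_true, lnA = 13_000.0, 5.0
T = np.array([700.0, 750.0, 800.0, 850.0, 900.0])  # kelvin, illustrative
lnK = lnA - Ea_true / (R * T) + np.random.default_rng(3).normal(0, 0.02, T.size)

# Linear fit of ln K against 1/T; the slope is -Ea/R.
slope, intercept = np.polyfit(1.0 / T, lnK, 1)
print(f"estimated Ea = {-slope * R:,.0f} cal/g-atom (true: {Ea_true:,.0f})")
```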

    Strongly Coupled Quark Gluon Plasma (SCQGP)

    We propose that the reason for the non-ideal behavior seen in lattice simulations of quark gluon plasma (QGP) and in ultra-relativistic heavy ion collision (URHIC) experiments is that the QGP near T_c and above is a strongly coupled plasma (SCP), i.e., a strongly coupled quark gluon plasma (SCQGP). It is remarkable that the widely used equation of state (EoS) of SCP in QED (quantum electrodynamics) fits lattice results on all QGP systems very nicely, with proper modifications to include color degrees of freedom and a running coupling constant. Results on pressure in pure gauge, 2-flavor, and 3-flavor QGP can all be explained by treating QGP as SCQGP, as demonstrated here. Energy density and speed of sound are also presented for all three systems. We further extend the model to systems with finite quark mass, and a reasonably good fit to lattice results is obtained for (2+1)-flavor and 4-flavor QGP. Hence SCQGP is the first unified model to explain the non-ideal QGP seen in lattice simulations with just two system-dependent parameters.
    Comment: Revised with corrections and new results, Latex file (11 pages), postscript file of 7 figures
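
    For orientation, the standard diagnostic separating a strongly coupled plasma from an ideal one is the coupling parameter Gamma; the definition below is the generic classical-plasma form, and the paper's QGP version (with color factors and a running coupling) is not reproduced here.

```latex
% Plasma coupling parameter: a plasma is strongly coupled when Gamma >~ 1
% and nearly ideal when Gamma << 1. The QGP case replaces the charge by a
% running colour coupling, which is not reproduced here.
\[
  \Gamma \;=\; \frac{\langle \text{potential energy} \rangle}
                    {\langle \text{kinetic energy} \rangle}
        \;\sim\; \frac{q^{2}}{4\pi\varepsilon_{0}\, a\, k_{B} T},
  \qquad a \equiv \left(\frac{3}{4\pi n}\right)^{1/3},
\]
% where n is the particle density and a the mean interparticle spacing.
```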

    Hydrogen absorption in solid aluminum during high-temperature steam oxidation

    Hydrogen is emitted by aluminum heated in a vacuum after high-temperature steam treatment. Wire samples are tested for this effect, showing a dependence on surface area. Two different mechanisms of absorption are inferred, and the corresponding reactions are deduced.

    Cosmology with the lights off: Standard sirens in the Einstein Telescope era

    We explore the prospects for constraining cosmology using gravitational-wave (GW) observations of neutron-star binaries by the proposed Einstein Telescope (ET), exploiting the narrowness of the neutron-star mass function. Double neutron-star (DNS) binaries are expected to be among the first sources detected after "first light" of Advanced LIGO, at a rate of a few tens per year in the advanced-detector era. However, the proposed ET could catalog tens of thousands per year. Combining the measured source redshift distributions with GW-network distance determinations will permit not only the precision measurement of background cosmological parameters, but will also provide insight into the astrophysical properties of these DNS systems. Of particular interest will be to probe the distribution of delay times between DNS-binary creation and subsequent merger, as well as the evolution of the star-formation rate density within ET's detection horizon. Keeping H_0, \Omega_{m,0} and \Omega_{\Lambda,0} fixed and investigating the precision with which the dark-energy equation-of-state parameters could be recovered, we find that with 10^5 detected DNS binaries we could constrain these parameters to an accuracy similar to forecasted constraints from future CMB+BAO+SNIa measurements. Furthermore, modeling the merger delay-time distribution as a power law, and the star-formation rate (SFR) density as a parametrized version of the Porciani and Madau SF2 model, we find that the associated astrophysical parameters are constrained to within ~10%. All parameter precisions scale as 1/sqrt(N), where N is the number of cataloged detections. We also investigate how precisions vary with the intrinsic underlying properties of the Universe and with the distance reach of the network (which may be affected by the low-frequency cutoff of the detector).
    Comment: 24 pages, 11 figures, 6 tables. Minor changes to reflect published version. References updated and corrected
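
    A toy demonstration of the quoted 1/sqrt(N) scaling, assuming the low-redshift relation d_L ~ cz/H_0 and a flat 10% distance error per event; all numbers are illustrative, and this is not the paper's Fisher-matrix machinery.

```python
# Toy standard-siren exercise: each simulated "siren" gives a noisy
# luminosity distance, and the precision of the combined H0 estimate
# improves roughly as 1/sqrt(N) with catalogue size.
import numpy as np

c = 299_792.458   # km/s
H0_true = 70.0    # km/s/Mpc, assumed fiducial value
frac_err = 0.10   # 10% distance error per event, illustrative
rng = np.random.default_rng(4)

for n_events in (10, 100, 1000, 10_000):
    estimates = []
    for _ in range(200):  # repeat the experiment to measure the scatter
        z = rng.uniform(0.01, 0.1, n_events)      # low-z toy catalogue
        d_true = c * z / H0_true                  # d_L ~ c*z/H0 at low z
        d_obs = d_true * (1 + rng.normal(0, frac_err, n_events))
        estimates.append(np.mean(c * z / d_obs))  # combine events
    print(f"N={n_events:6d}: sigma(H0) = {np.std(estimates):.3f} km/s/Mpc")
```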

    Modeling GRB 050904: Autopsy of a Massive Stellar Explosion at z=6.29

    GRB 050904 at redshift z=6.29, discovered and observed by Swift and with a spectroscopic redshift from the Subaru telescope, is the first gamma-ray burst to be identified from beyond the epoch of reionization. Since the progenitors of long gamma-ray bursts have been identified as massive stars, this event offers a unique opportunity to investigate star formation environments at this epoch. Apart from its record redshift, the burst is remarkable in two respects: first, it exhibits fast-evolving X-ray and optical flares that peak simultaneously at t~470 s in the observer frame, and may thus originate in the same emission region; and second, its afterglow exhibits an accelerated decay in the near-infrared (NIR) from t~10^4 s to t~3 x 10^4 s after the burst, coincident with repeated and energetic X-ray flaring activity. We make a complete analysis of the available X-ray, NIR, and radio observations, utilizing afterglow models that incorporate a range of physical effects not previously considered for this or any other GRB afterglow, and quantifying our model uncertainties in detail via Markov Chain Monte Carlo analysis. In the process, we explore the possibility that the early optical and X-ray flare is due to synchrotron and inverse Compton emission from the reverse shock regions of the outflow. We suggest that the period of accelerated decay in the NIR may be due to suppression of synchrotron radiation by inverse Compton interaction of X-ray flare photons with electrons in the forward shock; a subsequent interval of slow decay would then be due to a progressive decline in this suppression. The range of acceptable models demonstrates that the kinetic energy and circumburst density of GRB 050904 are well above the typical values found for low-redshift GRBs.
    Comment: 45 pages, 7 figures, ApJ accepted. Revised version, minor modifications and 1 extra figure
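
    A generic Metropolis-Hastings sketch of the kind of MCMC uncertainty quantification mentioned above; the afterglow model is replaced by a toy one-parameter Gaussian likelihood, so nothing here reflects the paper's actual model or code.

```python
# Minimal Metropolis-Hastings sampler: random-walk proposals accepted with
# probability min(1, posterior ratio). Toy Gaussian likelihood, flat prior.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, scale=1.0, size=100)  # synthetic observations

def log_posterior(mu):
    # Flat prior; Gaussian likelihood with known unit variance.
    return -0.5 * np.sum((data - mu) ** 2)

chain, mu = [], 0.0
logp = log_posterior(mu)
for _ in range(20_000):
    proposal = mu + rng.normal(0, 0.2)           # random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.random()) < logp_prop - logp:  # accept/reject step
        mu, logp = proposal, logp_prop
    chain.append(mu)

burned = np.array(chain[5000:])  # discard burn-in
print(f"posterior mean = {burned.mean():.3f}, std = {burned.std():.3f}")
```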