41 research outputs found

    Nonstandard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
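    To make the distinction concrete, the sketch below (a toy illustration, not the study's 164-team protocol) contrasts a conventional standard error from one analysis with the dispersion of estimates across hypothetical "teams" that analyze the same data under different, equally defensible choices. The dataset, the trimming rules, and the number of teams are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed dataset: a noisy linear relationship, standing in for the
# shared data that every team analyzes.
n = 500
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=2.0, size=n)

def ols_slope_and_se(x, y):
    """Plain OLS slope and its conventional standard error."""
    x_c = x - x.mean()
    y_c = y - y.mean()
    slope = (x_c @ y_c) / (x_c @ x_c)
    resid = y_c - slope * x_c
    se = np.sqrt(resid @ resid / (len(x) - 2) / (x_c @ x_c))
    return slope, se

# Hypothetical "teams": each applies a different outlier-trimming rule
# to the SAME data before estimating the same slope.
trim_quantiles = np.linspace(0.90, 1.00, 21)   # 21 toy teams
estimates = []
for q in trim_quantiles:
    cutoff = np.quantile(np.abs(y), q)
    keep = np.abs(y) <= cutoff
    slope, _ = ols_slope_and_se(x[keep], y[keep])
    estimates.append(slope)
estimates = np.array(estimates)

# Standard error: sampling uncertainty of a single team's estimate.
_, standard_error = ols_slope_and_se(x, y)
# "Nonstandard error" in the abstract's sense: dispersion of estimates
# across teams that processed the same data differently.
nonstandard_error = estimates.std(ddof=1)

print(f"standard error (one analysis): {standard_error:.3f}")
print(f"dispersion across teams (NSE): {nonstandard_error:.3f}")
```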

    A pilot study of the “Continuation of Care” model in “revolving-door” patients

    Abstract
    Purpose: This study tested the efficiency of continuation of care (COC) treatment delivered by inpatient caregivers, as compared to treatment administered by outpatient services, for "revolving-door" psychiatric patients. Number and days of hospitalization were examined.
    Method: All patients who were hospitalized three times or more during the past 12 months were offered continuing follow-up in the ward, by the same staff, instead of being referred to the outpatient department. Information on the number and length of hospitalizations before and after initiation of this care model was retrieved from the hospital's computerized database.
    Results: Of the 36 patients meeting the criteria, 35 agreed to participate. The number of hospitalizations in the 18 months following the index hospitalization was 1.79 ± 3.51, compared to 4.67 ± 1.79 before the index hospitalization (p = 0.0002), and the number of days of hospitalization in the 18 months after was 24 ± 41.65, compared to 119.71 ± 69.31 before (p < 0.0001).
    Conclusion: COC via inpatient follow-up significantly reduces the number and length of hospitalizations in "revolving-door" psychiatric patients, as compared to the traditional system of follow-up in an outpatient clinic.
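    As a hedged illustration of the kind of paired before/after comparison reported above, the sketch below uses synthetic hospitalization counts (not the study's data) and a Wilcoxon signed-rank test; the test choice is an assumption, since the abstract does not state which test produced the reported p-values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 35

# Synthetic hospitalization counts per patient: 18 months before vs.
# 18 months after the index hospitalization (made-up numbers only).
before = rng.poisson(lam=4.7, size=n_patients)
after = rng.poisson(lam=1.8, size=n_patients)

# Paired comparison on the same patients; Wilcoxon signed-rank is one
# common choice for skewed count data (an assumption, not the paper's
# stated method).
statistic, p_value = stats.wilcoxon(before, after)

print(f"before: {before.mean():.2f} ± {before.std(ddof=1):.2f}")
print(f"after:  {after.mean():.2f} ± {after.std(ddof=1):.2f}")
print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")
```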