27 research outputs found

    Registered Replication Report: Dijksterhuis and van Knippenberg (1998)

    Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence ("professor") subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence ("soccer hooligans"). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%-3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the "professor" category and those primed with the "hooligan" category (0.14%) and no moderation by gender.
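
    The headline estimate above comes from pooling per-lab condition differences across the replication sites. As a minimal sketch of that kind of inverse-variance pooling, the fixed-effect computation below combines invented lab summaries; the means, pooled SDs, and per-condition sample sizes are hypothetical, not the RRR data:

```python
# Minimal fixed-effect (inverse-variance) meta-analysis of per-lab mean
# differences in trivia accuracy, in percentage points. The lab summaries
# below are invented for illustration; they are not the RRR data.
import numpy as np

# (mean_professor, mean_hooligan, pooled_sd, n_per_condition) for each lab
labs = [
    (55.2, 55.1, 12.0, 100),
    (54.8, 55.0, 11.5, 95),
    (56.1, 55.8, 12.3, 110),
]

effects, weights = [], []
for m1, m2, sd, n in labs:
    diff = m1 - m2            # per-lab condition difference
    var = sd**2 * (2.0 / n)   # variance of a two-group mean difference
    effects.append(diff)
    weights.append(1.0 / var) # inverse-variance weight

effects, weights = np.array(effects), np.array(weights)
pooled = np.sum(weights * effects) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled difference = {pooled:+.2f} pp (95% CI ±{1.96 * se:.2f})")
```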

    The Psychological Science Accelerator’s COVID-19 rapid-response dataset

    In response to the COVID-19 pandemic, the Psychological Science Accelerator coordinated three large-scale psychological studies to examine the effects of loss-gain framing, cognitive reappraisals, and autonomy framing manipulations on behavioral intentions and affective measures. The data collected (April to October 2020) included specific measures for each experimental study, a general questionnaire examining health prevention behaviors and COVID-19 experience, geographical and cultural context characterization, and demographic information for each participant. Each participant started the study with the same general questions and then was randomized to complete either one longer experiment or two shorter experiments. Data were provided by 73,223 participants with varying completion rates. Participants completed the survey from 111 geopolitical regions in 44 unique languages/dialects. The anonymized dataset described here is provided in both raw and processed formats to facilitate re-use and further analyses. The dataset offers secondary analytic opportunities to explore coping, framing, and self-determination across a diverse, global sample obtained at the onset of the COVID-19 pandemic, which can be merged with other time-sampled or geographic data.
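
    For readers planning secondary analyses, a minimal sketch of how the processed dataset might be loaded and merged with region-level data follows; the filenames and column names (psacr_processed.csv, region, framing_complete) are assumptions for illustration, not the dataset's documented schema:

```python
# Hypothetical loading/merging sketch for secondary analysis. Filenames
# and column names are assumptions, not the dataset's documented schema.
import pandas as pd

psa = pd.read_csv("psacr_processed.csv")        # processed participant-level data
regions = pd.read_csv("region_covariates.csv")  # e.g., geographic or time-sampled data

# Attach region-level covariates to each participant on a shared key
merged = psa.merge(regions, on="region", how="left")

# Example: completion rate of a (hypothetical) framing-experiment flag by region
rate = merged.groupby("region")["framing_complete"].mean().sort_values()
print(rate.tail())
```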

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
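
    The 78% figure is the average proportional shrinkage of the pooled replication effect sizes relative to the original effects. A minimal sketch of that arithmetic, using invented r values rather than the Many Labs 5 estimates:

```python
# Proportional shrinkage of replication effect sizes relative to the
# originals. The r values are invented for illustration; the abstract's
# reported figure (78%) comes from the actual Many Labs 5 estimates.
import numpy as np

original = np.array([0.19, 0.28, 0.37, 0.42, 0.50])     # original-study r values
replication = np.array([0.00, 0.05, 0.07, 0.10, 0.15])  # cumulative replication r values

shrinkage = 1.0 - replication / original          # per-effect proportional reduction
print(f"mean shrinkage = {shrinkage.mean():.0%}") # ~82% with these toy values
```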