Ten simple rules for writing Dockerfiles for reproducible data science.
Computational science has been greatly improved by the use of containers for packaging software and data dependencies. In a scholarly context, the main drivers for using these containers are transparency and support of reproducibility; in turn, a workflow's reproducibility can be greatly affected by the choices made when building containers. In many cases, the container image is built from instructions provided in the Dockerfile format. In support of this approach, we present a set of rules to help researchers write understandable Dockerfiles for typical data science workflows. By following the rules in this article, researchers can create containers suitable for sharing with fellow scientists, for inclusion in scholarly communication such as education or scientific papers, and for effective and sustainable personal workflows.
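To make the point concrete, below is an illustrative sketch of the kind of Dockerfile such guidance targets: a pinned base image, explicitly versioned dependencies installed before the analysis code so cached layers survive code edits, and a non-interactive default command. This is not an example from the article itself; the image tag, file names (requirements.txt, run_analysis.py) and label value are assumptions made for the sake of illustration.

    # Minimal, reproducibility-oriented Dockerfile sketch (file names are assumed).
    FROM python:3.11.9-slim
    # Record provenance so the shared image is identifiable (hypothetical author).
    LABEL org.opencontainers.image.authors="Jane Researcher (hypothetical)"
    # Install pinned dependencies first so this layer is cached across code edits.
    COPY requirements.txt /tmp/requirements.txt
    RUN pip install --no-cache-dir -r /tmp/requirements.txt
    # Copy the analysis code last and set the working directory.
    WORKDIR /analysis
    COPY . /analysis
    # Run the full workflow non-interactively by default.
    CMD ["python", "run_analysis.py"]

Pinning the base image tag and dependency versions keeps rebuilds consistent over time; it does not by itself guarantee bit-identical results, but it removes the most common source of drift between a researcher's machine and a colleague's.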
The reliability of replications: a study in computational reproductions
This study investigates researcher variability in computational reproduction, an activity in which such variability is least expected. Eighty-five independent teams attempted numerical replication of results from an original study of policy preferences and immigration. Reproduction teams were randomly assigned to a ‘transparent group’ receiving the original study and code, or an ‘opaque group’ receiving only a description of the method and results and no code. The transparent group mostly verified the original results (95.7% matched the sign and p-value cutoff), while the opaque group had less success (89.3%). Exact numerical reproductions to the second decimal place were less common (76.9% and 48.1%, respectively). Qualitative investigation of the workflows revealed many causes of error, including mistakes and procedural variations. Even after curating for mistakes, we still find that only the transparent group was reliably successful. Our findings imply a need for transparency, but also for more: institutional checks and less subjective difficulty for researchers ‘doing reproduction’ would help, implying a need for better training. We also urge increased awareness of complexity in the research process and in ‘push button’ replications.
Opening practice: Supporting Reproducibility and Critical Spatial Data Science
This paper reflects on a number of recent trends towards a more open and reproducible approach to geographic and spatial data science. In particular, it considers trends towards Big Data and the impact these are having on spatial data analysis and modelling. It identifies a turn in academia towards coding as a core analytic tool, and away from proprietary software tools offering ‘black boxes’ in which the internal workings of the analysis are not revealed. It argues that this closed form of software is problematic and considers a number of ways in which issues identified in spatial data analysis (such as the modifiable areal unit problem, MAUP) could be overlooked when working with closed tools, leading to problems of interpretation and possibly to inappropriate actions and policies based on them. In addition, the paper considers the role that reproducible and open spatial science may play in addressing these issues. It highlights the dangers of failing to account for the geographical properties of data, now that all data are spatial (they are collected somewhere); the problems of a desire for n = all observations in data science; and the need for a critical approach, one in which openness, transparency, sharing and reproducibility provide a mantra for defensible and robust spatial data science.
Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty
This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical starting conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
