
    Finding the way forward for forensic science in the US: a commentary on the PCAST report

    A recent report by the US President’s Council of Advisors on Science and Technology (PCAST) [1] has made a number of recommendations for the future development of forensic science. While we all agree that there is much need for change, we find that the PCAST report’s recommendations are founded on serious misunderstandings. We explain the traditional forensic paradigms of match and identification and the more recent foundation of the logical approach to evidence evaluation. This forms the groundwork for exposing many sources of confusion in the PCAST report. We explain how the notion of treating the scientist as a black box, and the assignment of evidential weight through error rates, is overly restrictive and misconceived. Our own view sees inferential logic, and the development of calibrated knowledge and understanding among scientists, as the core of the advancement of the profession.
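
    For context, the logical approach referred to here rests on the likelihood ratio and the odds form of Bayes’ theorem; the notation below is standard rather than taken from the commentary. With findings E and competing propositions H_p and H_d,

        \[
        \mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)},
        \qquad
        \frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)} = \mathrm{LR} \times \frac{\Pr(H_p)}{\Pr(H_d)}.
        \]

    The scientist assigns the LR; the prior odds belong to the court. An error rate characterises a procedure across cases, whereas the LR weighs the particular findings, which is broadly the distinction drawn above.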

    Theoretical value of the recommended expanded European Standard Set of STR loci for the identification of human remains

    We have undertaken a series of simulations to assess the effectiveness of commercially available sets of STR loci, including the loci recommended for inclusion in the expanded European Standard Set, for the purpose of human identification. A total of 9200 genotype simulations were performed using DNA·VIEW. The software was used to calculate likelihood ratios (LRs) for 23 groups of relatives and to determine the probability of identification for scenarios ranging from 10 to 250,000 victims. The additional loci included in the recommended expanded European Standard Set, when used in conjunction with the Identifiler® kit, significantly improved the typical LRs for the tested scenarios and the likely success of providing correct identifications.
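
    For orientation, the per-locus quantities in such calculations compound multiplicatively across independent loci. Below is a minimal sketch of that product rule for a simple direct match, with invented allele frequencies; DNA·VIEW’s kinship calculations are considerably richer than this.

        # Sketch: combined likelihood ratio across independent STR loci for a
        # direct identification (profiles match; Hp: same person, Hd: unrelated
        # person). Under Hd the match probability at each locus follows from
        # Hardy-Weinberg: 2*p*q for a heterozygote, p**2 for a homozygote.
        # Allele frequencies below are invented for illustration only.

        def locus_lr(p, q=None):
            """LR contribution of one matching locus: 1 / random match probability."""
            rmp = p**2 if q is None else 2 * p * q
            return 1.0 / rmp

        # hypothetical genotype: (p, q) allele frequencies at each typed locus;
        # q is None for a homozygous locus
        loci = [(0.12, 0.08), (0.21, None), (0.05, 0.30), (0.17, 0.09)]

        combined_lr = 1.0
        for p, q in loci:
            combined_lr *= locus_lr(p, q)

        print(f"combined LR = {combined_lr:.3e}")  # product over independent loci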

    A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer

    Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert’s likelihood ratio without further consideration. This is agreed by all parties. In normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and that no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. We state that this is the summary that should be presented to a court in every scientific assessment of evidential weight, with supporting information about how it was constructed and on what it was based.

    Impact of allelic dropout on evidential value of forensic DNA profiles using RMNE

    Motivation: Two methods are commonly used to report on the evidence carried by forensic DNA profiles: the ‘Random Man Not Excluded’ (RMNE) approach and the likelihood ratio (LR) approach. It is often claimed as a major advantage of the LR method that dropout can be assessed probabilistically.
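
    The RMNE statistic itself is straightforward to compute under Hardy-Weinberg assumptions, which makes dropout the crux: the basic formula presumes that every contributor allele was detected. A minimal sketch with invented frequencies follows.

        # Sketch: 'Random Man Not Excluded' for a mixture. A random person is
        # included at a locus if both of their alleles are among the alleles
        # observed in the mixture; under Hardy-Weinberg that probability is
        # (sum of observed-allele frequencies) squared. The combined inclusion
        # probability is the product over independent loci. Frequencies are
        # invented for illustration.

        def rmne_locus(observed_freqs):
            """P(random person is not excluded at this locus)."""
            return sum(observed_freqs) ** 2

        # observed mixture alleles per locus -> their population frequencies
        mixture = [
            [0.10, 0.22, 0.05],   # three alleles seen at locus 1
            [0.18, 0.07],         # two alleles seen at locus 2
            [0.30, 0.11, 0.09],
        ]

        p_included = 1.0
        for freqs in mixture:
            p_included *= rmne_locus(freqs)

        print(f"P(random man not excluded) = {p_included:.3e}")
        # Note: if an allele may have dropped out, this product is no longer
        # a valid bound, which is the issue the paper examines.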

    Development of European standards for evaluative reporting in forensic science: The gap between intentions and perceptions

    Criminal justice authorities of EU countries currently engage in dialogue and action to build a common area of justice and to help increase the mutual trust in judicial systems across Europe. This includes, for example, the strengthening of procedural safeguards for citizens in criminal proceedings by promoting principles such as equality of arms. Improving the smooth functioning of judicial processes is also pursued through the work of expert working groups in the field of forensic science, such as the working parties under the auspices of the European Network of Forensic Science Institutes (ENFSI). This network aims to share knowledge, exchange experiences and come to mutual agreements in matters concerning forensic science practice, among them the interpretation of results of forensic examinations. For example, through its Monopoly Programmes (financially supported by the European Commission), ENFSI has funded a series of projects that come under the general theme ‘Strengthening the Evaluation of Forensic Results across Europe’. Although these initiatives reflect a strong commitment to mutual understanding on general principles of forensic interpretation, the development of standards for evaluation and reporting, including roadmaps for implementation within the ENFSI community, is fraught with conceptual and practical hurdles. In particular, experience through consultations with forensic science practitioners shows that there is a considerable gap between the intentions of a harmonised view on principles of forensic interpretation and the way in which work towards such a common understanding is perceived in the community. In this paper, we review and discuss several recurrently raised concerns. We acknowledge practical constraints such as limited resources for training and education, but we shall also argue that addressing topics in forensic interpretation now is of vital importance, because forensic science continues to be challenged by proactive participants in the legal process who tend to become more demanding and less forgiving.

    Comments arising from WJ Thompson, "Uncertainty in probabilistic genotyping of low template DNA: A case study comparing STRmix and TrueAllele"

    Thompson reports a comparison of data from STRmix and TrueAllele. The data he presents arise from different inputs to the two software packages; if the input data are made more similar, the outputs become more similar. Thompson argues that the analytical threshold (AT) should be varied in casework. This produces different LRs, but the analyst would be left deciding what to do with these options. That decision cannot be based on the LRs; it should be based on whether any movement in the AT adds reliable or unreliable data. This is how most laboratories set their AT in the first place. Hence it is pointless, and potentially dangerous, to experimentally vary the AT in casework. The profile is low level and shows at most three peaks. Thompson argues that LR results assuming that the number of contributors (NoC) is 2 or 3 should be reported. Uncertainty in NoC should instead be treated as a nuisance variable and summed out.
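
    The recommended treatment of NoC amounts to a standard marginalisation, written here in schematic notation rather than quoted from the text:

        \[
        \mathrm{LR} = \frac{\sum_{n} \Pr(E \mid H_p, N = n)\,\Pr(N = n)}
                           {\sum_{n} \Pr(E \mid H_d, N = n)\,\Pr(N = n)},
        \]

    where N is the number of contributors, the sum runs over the plausible values (here 2 and 3), and the weights Pr(N = n) come from the analyst’s assessment. A single summary LR then replaces the pair of conditional ones.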

    A logical framework for forensic DNA interpretation

    The forensic community has devoted much effort over the last decades to the development of a logical framework for forensic interpretation, which is essential for the safe administration of justice. We review the research and guidelines that have been published and provide examples of how to implement them in casework. After a discussion of uncertainty in the criminal trial and the roles that the DNA scientist may take, we present the principles of interpretation for evaluative reporting. We show how their application helps to avoid a common fallacy and present strategies that DNA scientists can apply so that they do not transpose the conditional. We then discuss the hierarchy of propositions and explain why it is considered a fundamental concept for the evaluation of biological results, as well as the differences between assessing results given propositions at the source level and at the activity level. We show the importance of pre-assessment, especially when the questions relate to the alleged activities and when transfer and persistence need to be considered by the scientists to guide the court. We conclude with a discussion of statement writing and testimony. This provides guidance on how DNA scientists can report in a balanced, transparent, and logical way.
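
    The fallacy of the transposed conditional mentioned above can be stated formally (the notation is illustrative, not the paper’s):

        \[
        \Pr(E \mid H) \neq \Pr(H \mid E).
        \]

    For example, “the probability of this DNA profile if it came from an unknown person is one in a billion” concerns Pr(E | H_d); restating it as “the probability that the DNA came from an unknown person is one in a billion” silently asserts Pr(H_d | E), which additionally depends on the prior odds.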

    Bayesian hierarchical random effects models in forensic science

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history, dating from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley, which introduced a Bayesian hierarchical random effects model for the evaluation of evidence, with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now become sufficiently well developed and widespread that it is timely to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly Programme to develop a software package, for use by forensic scientists worldwide, that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package; references to SAILR are made as appropriate.
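
    The normal-normal random effects calculation at the heart of Lindley’s glass example has a closed form when the variance components are treated as known. The sketch below is a minimal numerical illustration with invented refractive index numbers; it makes no claim to match the SAILR implementation.

        # Sketch of a two-level normal random effects likelihood ratio in the
        # spirit of Lindley (1977) for glass refractive index. Within-source
        # variance sigma2 and between-source parameters (mu, tau2) are assumed
        # known; all numbers are invented for illustration.
        import numpy as np
        from scipy.stats import norm, multivariate_normal

        def lindley_lr(xbar, m, ybar, n, mu, tau2, sigma2):
            """LR for the means of control (xbar, m items) and recovered
            (ybar, n items) measurements: same-source vs different-source."""
            vx = sigma2 / m + tau2          # marginal variance of xbar
            vy = sigma2 / n + tau2          # marginal variance of ybar
            # Same source: a shared source mean induces covariance tau2.
            num = multivariate_normal.pdf([xbar, ybar], mean=[mu, mu],
                                          cov=[[vx, tau2], [tau2, vy]])
            # Different sources: independent source means, independent sample means.
            den = norm.pdf(xbar, mu, np.sqrt(vx)) * norm.pdf(ybar, mu, np.sqrt(vy))
            return num / den

        # Invented example: control and recovered means close together,
        # both away from the population mean mu.
        print(lindley_lr(xbar=1.51892, m=10, ybar=1.51895, n=5,
                         mu=1.5182, tau2=1.6e-7, sigma2=1.6e-9))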

    The efficacy of DNA mixture to mixture matching

    Standard practice in forensic science is to compare a person of interest’s (POI) reference DNA profile with an evidence DNA profile and calculate a likelihood ratio that considers propositions including and excluding the POI as a DNA donor. A method has recently been published that provides the ability to compare two evidence profiles (of any number of contributors and of any level of resolution), comparing propositions that the profiles either have a common contributor or do not have any common contributors. Using this method, forensic analysts can provide intelligence to law enforcement by linking crime scenes when no suspects may be available. The method could also be used as a quality assurance measure to identify potential sample-to-sample contamination. In this work we analyse a number of constructed mixtures, ranging from two to five contributors and with known numbers of common contributors, in order to investigate the performance of using likelihood ratios for mixture-to-mixture comparisons. Our findings demonstrate the ability to identify common donors in DNA mixtures, with the power of discrimination depending largely on the least informative mixture of the pair being considered. The ability to match mixtures to mixtures may provide intelligence information to investigators by identifying possible links between cases which otherwise may not have been considered connected.
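
    Schematically, and in notation not taken from the paper, the comparison evaluates

        \[
        \mathrm{LR} = \frac{\Pr(E_1, E_2 \mid H_1)}{\Pr(E_1, E_2 \mid H_2)}
                    = \frac{\sum_{g} \Pr(E_1 \mid g)\,\Pr(E_2 \mid g)\,\Pr(g)}
                           {\Pr(E_1)\,\Pr(E_2)},
        \]

    where E_1 and E_2 are the two mixed profiles, H_1 holds that they share a common donor of unknown genotype g (summed over all genotypes), and H_2 holds that they share no donors. The published method additionally conditions on the numbers of contributors and models peak heights; this shows only the core structure of the marginalisation.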