Statistics and the Evaluation of Evidence for Forensic Scientists. Franco Taroni
Dedication
To our families
[T]he revision of opinion in the light of new information ... is one of the most important human intellectual activities. (p. 290)
– Ward Edwards
Edwards, W. (2009). Divide and conquer: how to use likelihood and value judgments in decision making (1973). In: A Science of Decision Making: The Legacy of Ward Edwards (ed. J.W. Weiss and D.J. Weiss), 287–300. Oxford: Oxford University Press.
Foreword
Uncertainty affects nearly everything we do. Virtually every decision we make involves uncertainty of one kind or another. However, uncertainty does not come naturally to people's minds. Whenever we can (and sometimes when we can't), we substitute an imagined certainty that we find more comfortable and easier to plan against.
Statistics offers tools to deal with uncertainty, principally through probability. There are many models and methods in a statistician's toolkit. Which to use when, and how to create more when necessary, are the typical tasks facing users of statistical methods. Every application of statistics has to be sensitive to the institutional context in which the problem arises. In the case of forensic evidence, the institutional structure includes both the organizations for which forensic scientists work and the legal structures to which they ultimately report.
The stakes in forensic work are high: a person's liberty, and sometimes life, typically hangs in the balance. As a consequence, a careful consideration of the uncertainties involved is morally imperative. Doing responsible work under these circumstances requires that sources of uncertainty be identified, quantified, and reported, both truthfully and effectively. The first task is to figure out what the principal sources of uncertainty are. For example, DNA analyses often report tiny probabilities that someone other than the defendant would have the same configuration of alleles as those found at a crime scene. But this probability is premised on the assumption that the crime scene and laboratory work have been error‐free. If the probability of contamination from one of these sources is one in a thousand, contamination is the dominant source of uncertainty, and should be reported.
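The arithmetic behind this point can be sketched in a few lines. The figures below are hypothetical, chosen only to show how an error rate of one in a thousand swamps a random‐match probability of one in a billion:

```python
# Back-of-the-envelope sketch with hypothetical figures, not from any real case.
random_match_prob = 1e-9    # hypothetical random-match probability reported by the lab
contamination_prob = 1e-3   # hypothetical probability of contamination or handling error

# Probability that the reported "match" is spurious for either reason,
# assuming the two error sources are independent:
spurious_prob = 1 - (1 - random_match_prob) * (1 - contamination_prob)

print(f"{spurious_prob:.2e}")  # about 1e-3: contamination dominates the uncertainty
```

Under these assumptions the overall probability of a spurious match is essentially the contamination probability itself; the headline random‐match figure contributes almost nothing to it.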
The second task is quantification. Depending on the source of uncertainty, this can be daunting. Records can be examined to find how often collection and lab errors leading to contamination have been discovered, for example, but one is left wondering how many others there may have been that were not discovered. Experiments can help, particularly blind testing in which the technicians do not know they are being tested. Our ability to conduct such tests is in its infancy.
Finally, there is the question of how to report the uncertainty in forensic analyses. The legal structure does not necessarily welcome uncertainty, as it complicates the task of the finders‐of‐facts, whether judges or juries. But it is incumbent on forensic scientists to be both thoughtful and truthful in conveying to the parties and to the court the uncertainties that lurk behind their findings. A shrill proclamation of infallibility does not advance justice.
The legal context has other implications for which statistical methods are most apt. A case involves the innocence or guilt of a particular defendant or group of defendants, faced with a particular set of evidence. As such, methods that rely for justification on long‐run frequencies seem beside the point. One has to do the best one can in this specific instance. Therefore, subjective probability, which focuses on the specifics of the case without embedding it in a hypothetical infinite string of superficially similar cases, is more suited to forensic applications. What are the practical implications of such a choice? It permits forensic scientists to summarize their opinions in a number, such as ‘The probability of a correspondence between the latent print at the crime scene and that of the defendant if the defendant is not the source of the crime scene print is 1%’. That's all very well as a statement of personal belief, but if anyone else is to take such a statement seriously, it must be accompanied by reasons. What assumptions were made in the analysis? What considerations make those assumptions plausible? If other plausible assumptions were made, what would their consequences be? Subjective (or personal) probability is a way of conveying one's opinion in a precise manner, but whether anyone else should pay attention to it depends on the persuasiveness of the arguments that go with it.
The book, Statistics and the Evaluation of Evidence for Forensic Scientists, aims to help forensic scientists and others do this work well. That it is now in its third edition reflects the success of the previous editions in summarizing what had been found. That a new edition is needed reflects the new thinking and new work that has been done in the last decade and a half. As more progress is made, no doubt further editions will be needed. This edition shows what has been accomplished, and charts the way forward.
J. B. Kadane
December 2019
Preface to Third Edition
The Preface to the second edition of this book recalled the comment in the first edition that the role of statistics in forensic science was continuing to increase, partly because the debate over DNA profiling looked set to carry on into the foreseeable future. In 2004, the time of the second edition, we wrote that ‘it now appears that the increase is continuing and perhaps at a greater rate than in 1995’ (the time of the first edition). In 2020, we are confident that the increase is still continuing and the need for a third edition is pressing.
With the increase in the availability of data and of computing power, the role of statistical and probabilistic reasoning in the interpretation and evaluation of evidence is even more important than it was at the time of the second edition. The courts are increasingly aware of the importance of the proper assessment of evidence in which there is random variation. Various general publications testify to the need for a new edition of this book.
Four reports published by the Royal Statistical Society on the topic of Communicating and Interpreting Statistical Evidence in the Administration of Criminal Justice (2010–2014), available from https://rss.org.uk/news-publication/publications/our-research/.
Expert Evidence in Criminal Proceedings in England and Wales. The Law Commission of England and Wales, 2011, available from https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2015/03/lc325_Expert_Evidence_Report.pdf.
A National Academy of Sciences report Strengthening Forensic Science in the United States: A Path Forward (Committee on Identifying the Needs of the Forensic Sciences Community; Committee on Applied and Theoretical Statistics, National Research Council, 2009); available from https://www.ncjrs.gov/pdffiles1/nij/grants/228091.pdf.
European Network for Forensic Science Institutes Guideline for Evaluative Reporting in Forensic Science, 2015; available from http://enfsi.eu/wp-content/uploads/2016/09/m1_guideline.pdf.
The President's Council of Advisors on Science and Technology Report on Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature‐Comparison Methods, 2016; available from https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_science_report_final.pdf.