Special Report
Judging the Judges
How well and consistently do judges carry out their 'gatekeeping' role? How well do they understand science? And is this court activity assessed in any way?
Daubert Expert Qualifications and Intellectual Rigor (in Kumho): differences between scientists, engineers and other types of technical experts
By Thomas F. Schrager, Ph.D.
A key question in the Kumho decision was whether the Daubert criteria, written in terms of scientific practice, would apply to non-scientific expertise. The answer was yes, and in some respects this was viewed as a further extension of Daubert's generally strict standards to other areas of expertise. In some ways that is certainly true. But in other ways, while the reach of Daubert was extended and confirmed, the very nature of the extension also strengthened its flexibility. The reliability standard, so closely associated with scientific method in Daubert, was separated out to apply to the professional practice of any expertise (which should also include science). Treating reliability as a function of the expert's intellectual activity, apart from the specific Daubert criteria, is a step toward greater flexibility on questions that do not fit easily under one of those criteria. This is also reflected in the Court's comment that the four criteria should be applied when appropriate, but that they are not written in stone and are not meant to serve as a checklist. Lastly, the ruling emphasized that the expert's methodology and testimony should reflect the same level of intellectual rigor that would be practiced outside the courtroom, in one's laboratory or other settings of professional activity.
What is meant by 'intellectual rigor'? Many immediately equate it with the 95 percent statistical threshold, and while that may reflect one result of intellectual rigor, this bright line between statistical significance and non-significance hardly encompasses the wider and more basic intellectual activities in the practice of science (and of other areas of expertise as well) that can be referred to as intellectual rigor. It may require much intellectual rigor to arrive at a well-conducted study with a finding significant at the 95 percent level; it does not require much intellectual rigor simply to abide by that bright statistical line.
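For readers unfamiliar with the threshold itself, the sketch below (illustrative only, with made-up numbers and a hypothetical exposed/control comparison) shows how mechanical the 95 percent 'bright line' is in practice: a result is labeled statistically significant if its p-value falls below 0.05, and not significant otherwise, regardless of how much or how little rigor went into producing the underlying data.

```python
from scipy import stats

# Hypothetical measurements for an exposed group and a control group
# (made-up numbers, purely for illustration).
exposed = [5.3, 4.9, 5.6, 5.1, 5.4, 4.8, 5.2, 5.0]
control = [5.0, 4.7, 5.2, 4.9, 5.1, 4.6, 4.8, 5.3]

# A standard two-sample t-test yields a p-value for the group difference.
result = stats.ttest_ind(exposed, control)

# The conventional "bright line": significant if p < 0.05, not significant otherwise.
print(f"p = {result.pvalue:.3f}; significant at the 95 percent level: {result.pvalue < 0.05}")
```

A result with p = 0.049 is labeled significant and one with p = 0.051 is not, even though the two reflect nearly identical evidence; the threshold itself says nothing about the rigor of the study that produced it.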
Intellectual rigor is involved in reviewing data and identifying gaps or contradictions that require further investigation; developing a hypothesis and an appropriate experiment to test it, which in turn will address the data gap; running the experiment; collecting the data; and analyzing it. It reflects the sum total of activities a scientist engages in to carry out the scientific method. When asked for a definition of scientific method, it is not easy to provide a single or simple one; even the National Academy of Sciences states that it cannot provide a simple definition. Why?
Because scientific method, while it includes certain rules such as hypothesis testing, reproducibility and statistical significance, also includes a myriad of judgment calls based on one's training and experience. It has to do with the way one interprets data and study protocols, identifies strengths and weaknesses, and draws inferences from the results of studies. One good scientist can draw different conclusions from another. Does this mean one is 'right' and the other 'wrong'? Probably not. It is what keeps science alive in its forward movement, continually refining and improving the understanding of a given phenomenon, as much through dead ends and wrong leads as through the decisive finding.
Scientists who conduct studies and run laboratories are not in ivory towers; they run a business with very high costs, in which they must constantly watch expenses and seek new sources of funding. This entire process is part of the intellectual rigor that may ultimately produce a finding at the 95 percent (or 99 percent or better) level of statistical significance. Grant proposals, the primary mode of funding, are highly competitive, with fewer than one in ten or one in twenty being funded.
The proposed research has to be well thought out, with a high chance of success, since the funding agency has limited resources and will be investing hundreds of thousands of dollars in a single project. At the same time, the answer cannot be known beforehand, or there would be no point in doing the experiments. Such proposals are therefore more than speculation but less than a certainty. They are informed, well-thought-out plans that set out a 'more likely than not' case; otherwise too much of the funding would be wasted, and experience does not show that to be the case.
These plans reflect the intellectual rigor of the investigator. In fact, the evaluation of a proposal rests as much on how the plan is put together, and how well thought out it is, as on the likelihood of positive results.
This fits well with the Kumho standard of applying intellectual rigor to one's opinions. It is important to note that whereas in science, in the laboratory as well as in the field, population effects are usually the focus of study, in the courtroom the focus is often, if not usually, a single individual. And to determine causation in an individual, statistical analysis and error rates cannot simply be applied; only a judgment call grounded in intellectual rigor can.