Risk Assessment is another type of causation analysis: predicting whether an exposure, usually in a population rather than in a single individual, will increase the risk of developing some adverse health effect, without necessarily determining whether such a health effect actually occurs. Risk means that all exposed individuals are more likely to develop an adverse effect; it does not mean that any particular individual will, or even that the majority of individuals will. Smoking, for example, increases the risk of lung cancer from approximately 1 in 100 to 1 in 10--a significant increase. Nevertheless, ninety percent of smokers do not develop lung cancer, even though they are all at increased risk. Risk Assessment is used by regulatory agencies, such as the EPA, to determine how much of a chemical can be released into the environment without causing an unacceptable increase in the risk of an adverse effect. 'Unacceptable' is more of a policy decision than a scientific one.
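The smoking figures above can be made concrete with a short calculation of relative and attributable risk (a minimal sketch; the 1-in-100 and 1-in-10 rates are the ones quoted in the text, not independent data):

```python
# Illustrative risk arithmetic using the figures quoted above:
# baseline lung-cancer risk ~1 in 100, smoker risk ~1 in 10.

baseline_risk = 1 / 100   # unexposed (non-smoker) risk
exposed_risk = 1 / 10     # exposed (smoker) risk

relative_risk = exposed_risk / baseline_risk       # ~10x
attributable_risk = exposed_risk - baseline_risk   # 9 excess cases per 100 exposed
unaffected_share = 1 - exposed_risk                # 90% of smokers never develop lung cancer

print(f"Relative risk:          {relative_risk:.1f}x")
print(f"Attributable risk:      {attributable_risk:.0%} excess cases among the exposed")
print(f"Smokers never affected: {unaffected_share:.0%}")
```

This is the sense in which everyone exposed is at "increased risk" even though most exposed individuals are never affected: the relative risk is large, but the absolute risk remains well below one.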
Specific causation analysis can be divided into the following three components:
(1) Hazard Assessment
(2) Exposure Assessment
(3) Health Assessment
(1) Hazard Assessment: what harm can the chemical cause, based on its intrinsic toxicity, the circumstances of exposure, the form of the chemical (gas, liquid, solid), and the susceptibility of the individual.
(2) Exposure Assessment: how much of the chemical is in a medium (air, water, food, soil) available to be taken into the body? Is this a one-time or a repeated exposure? Does it occur over a short period of time (acute) or over many months or years (chronic)? How does the length of exposure affect the toxicity of the chemical? How much of the chemical gets absorbed into the body (dose), and where in the body does it end up (distribution and target organ/tissue)?
(3) Health Assessment: what type of health effect develops (or gets exacerbated), and does this occur immediately or after a delay (lag time)? Is this a new effect in the individual, or is there a history of this type of problem, made worse by the chemical exposure? Is the individual in a high-risk group (in utero, infant, elderly, reduced immune function)? Are there other (alternative) known causes for this problem, and were these causes present?
Epidemiological Data: uses and limitations in establishing chemical causation. By Thomas F. Schrager, Ph.D.
The key study cited above, by Kenneth Rothman, from a seminar on science in the courtroom, goes through the ways in which epidemiological data can be used but also how easily it can be misused and misunderstood. First, as Rothman points out, there is no single set of criteria to establish the validity of a study or of its data. If there were, one would not need a scientist to determine the outcome, just someone capable of going down a checklist. Second, he points out that many types of error can occur in a study; the question is not whether this or that type of error occurred in a particular study, but the amount of each type, since it is almost inevitable that every kind of error will occur to some degree in every study.
What Rothman states is required to assess the validity of a study is a thorough criticism of each of its aspects, with the goal of quantifying each type of error. This ability rests on the training and experience of the investigator, who must be capable of a penetrating critique. It is not a task for the lay person--or judge--or even for a scientist who lacks training in this area.
Rothman goes through various aspects of the epidemiological approach to show that things are not as simple or as obvious as they may at first seem. Simply 'having' an epidemiological study does not necessarily mean that one has much. He points out that most effects, certainly in chemical toxicity, have multiple causes, so that no single cause is either sufficient or necessary. He further points out that each cause typically has multiple components, all of which must be present for it to act. 'Sufficient' means the factor alone can produce the adverse effect; 'necessary' means that all occurrences of the effect flow through this single factor. The latter is rarely the case, there usually being multiple causes of an effect that act independently.
Rothman discusses the so-called 'Hill Criteria of Causation'--as outlined in a 1965 paper by Bradford Hill--even though the term 'criteria' is found nowhere in Hill's discussion. It is worth noting that these guidelines were in fact first developed by the authors of the first Surgeon General's report on smoking and lung cancer, as specific guidelines for the smoking-lung cancer link, and 'not written in stone' (as one of the authors stated) for any other chemical or purpose. Nevertheless, these 'criteria' have taken on a life of their own as the 'proof' of whether causation has been established from an epidemiological study.
As an example, the first factor, 'strength of association,' suggests that the stronger the association, the more likely it is a causal one. But there are many examples that contradict this. There is a strong association between the number of color TV sets in one's home and the risk of developing colon cancer, but no one believes that color TV sets have any causal relationship to colon cancer. Instead, the association reflects an affluent lifestyle, which does include various characteristics that increase the risk of colon cancer. Likewise, the association, as Rothman points out, between cardiovascular disease and smoking is quite weak (probably because there are so many causes that are inter-related in complex ways), but few doubt a causal connection between smoking and heart disease.
Another factor Rothman cites is consistency, Hill's position being that the more consistent the results from one study to the next, the more likely the relationship is causal. While this may be true, consistency may not always be attainable, and the converse--the less consistency across studies, the more likely the relationship is not causal--is fallacious. Because most causes consist of many components, the absence of one component may preclude a causal effect in a given study, but this does not mean the factor does not cause the effect. And in many other instances the only differences between studies are size, statistical power, or study design, which may produce different outcomes without having anything to do with proof of a causal factor.
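The point about study size can be illustrated with a hypothetical two-group comparison: the same true effect can reach statistical significance in a large study and miss it in a small one, so inconsistent results may reflect nothing more than differing statistical power. This sketch uses a standard normal-approximation z-test for two proportions; all of the numbers (a doubling of risk from 1% to 2%, studies of 500 and 5,000 per group) are invented for illustration:

```python
import math

def two_proportion_z(cases_exp, n_exp, cases_ctl, n_ctl):
    """Normal-approximation z-test for a difference between two proportions."""
    p1, p2 = cases_exp / n_exp, cases_ctl / n_ctl
    pooled = (cases_exp + cases_ctl) / (n_exp + n_ctl)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_exp + 1 / n_ctl))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Identical true effect (2% vs 1% incidence, a doubling of risk)
# observed at two different study sizes.
for n in (500, 5000):
    z, p = two_proportion_z(int(0.02 * n), n, int(0.01 * n), n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n={n} per group: z={z:.2f}, p={p:.3f} -> {verdict}")
```

The small study fails to detect the doubled risk while the large study detects it easily, even though the underlying effect is identical; a naive "consistency" tally would wrongly count the small study as evidence against causation.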
Rothman goes through each factor in a similar manner, the message being that there is no simple answer and no simple 'checklist' of criteria to determine whether an epidemiological study has shown causation, or is even properly designed to do so. Without knowing how to examine these issues rigorously, and without examining each and every one, not only might a wrong conclusion be drawn but, in the courtroom setting, a whole other set of data may be set aside or precluded.