Common Sense Steps

Frieda Slavin

Science is rarely certain about anything, and certainly not about most links between environmental exposures and health effects in people. Nonetheless, the evidence showing links to health grows ever stronger as research progresses and becomes ever more sophisticated:

  • Scientists have generated compelling laboratory evidence revealing adverse effects in animals at low levels of exposure, affecting endpoints relevant to cancer, birth defects, reproductive effects, immune system dysfunction, respiratory problems, and learning and behavior problems, among others.
  • They have demonstrated that many of the underlying mechanisms causing those effects in animals are similar, if not identical, to human mechanisms.
  • They have documented human exposures to chemicals at levels that produce harm of many types in animals.
  • And they have identified trends in human health and disability that can be predicted on the basis of the above.

But establishing scientific certainty of harm to people is elusive at best and in many cases likely to be impossible before countless people would be affected adversely. After all, epidemiology can only establish harm after an epidemic has occurred. Purposefully carrying out controlled experiments on people is considered, appropriately, unethical. And thus we are left with the plethora of uncontrolled, largely unmonitored experiments currently underway because of ubiquitous exposure.

Given these limitations, and given that our current regulatory system is unlikely to strengthen exposure standards absent much firmer proof, what is a person, or a parent, or a family, to do?

Much good, practical advice is available on the web and in print. Some of the best places to turn for practical advice are listed below. In addition to pointing toward these resources, on this page we will highlight a few old themes (“constants”) and then focus on new issues that are emerging from recent research and analysis.

One general point: As you make choices about products to buy, things to do, food to eat, places to go, bear in mind that government standards for regulating environmental threats to health are at best a bare minimum and at worst completely inadequate and health threatening. Treat those standards as a floor: what you choose to do should at least meet them, and will often need to go further.

This is because government regulations represent a compromise negotiated between advocates for public health and parties, usually companies or trade associations, with an economic interest in protecting their access to the market. The playing field on which the negotiation takes place is strongly biased in favor of the vested interests, who have succeeded over several decades of lobbying in putting in place evidentiary standards for proof of harm that make it very difficult to prevent the marketing of new products, or to remove old ones, even in the face of compelling evidence of plausible harm. Decades of experience reinforce that conclusion.

Constants:

  • Smoking harms adults, children and the developing fetus. It’s not just the irritation of the smoke itself, it’s also compounds added to the tobacco, the paper and the filter that make their way into your lungs and your bloodstream. Rules #1-3: don’t smoke; don’t inflict second-hand smoke on someone else; and don’t allow smokers to share their second-hand smoke with you or your family, especially your young children.
  • The fetus is remarkably sensitive to alcohol. Avoid alcoholic beverages during pregnancy; the impacts of prenatal alcohol exposure can last a lifetime.
  • Ozone damages developing lungs. While studies have shown for some time that ozone can trigger asthmatic attacks, the latest research even implicates ozone in the actual causation of asthma itself. When ozone levels rise and local governments issue air pollution warnings, pay attention. Some local newspapers carry regular ozone notices. They are worth reading and heeding.
  • Pollutants in some fish can damage the fetus, undermining development of disease resistance and cognitive development. Heed fish advisories posted by public health agencies.
    New York State Dept of Health: Health Advisories on Chemicals in Fish
  • Some plastics leach biologically active materials into food with which they come into contact, particularly when heated. If you must use plastic, at least don’t microwave food in it.

 


Web and print resources

The Children’s Health Environmental Coalition’s HealtheHouse: an interactive resource for parents to learn about simple and effective steps they can take to protect their baby from environmental harm within the home.

The GreenGuide’s product reports: “a one-stop, reliable and easy-to-use shoppers’ guide so that you can make wiser, more conscientious shopping decisions.” Reports available include “flea control,” “insect repellant” and “household cleaning supplies.”

Raising Healthy Children in A Toxic World, a book by Philip Landrigan, Herbert Needleman and Mary Landrigan.

The Resource Guide on Children’s Environmental Health, by the Children’s Environmental Health Network.

Cleaning for Health: Products and Practices for a Safer Indoor Environment, an excellent and thorough review of cleaning products by Inform, Inc.

The Healthy School Network: ways to reduce exposures at school.

The Healthy Building Network: steps to reduce exposures via better selection of building materials and hospital equipment.

A number of organizations also offer solid information about ways to reduce pesticide use.

Mixtures Are the Rule, Not the Exception

John Peterson Myers, PhD, CEO of Environmental Health Sciences

No one experiences just one chemical at a time. Hundreds of synthetic chemicals contaminate every living person. Yet almost all applied and basic science underpinning modern regulation has tested one chemical at a time. 

[Figure: gas chromatograms of urine obtained from the diapers of two infants at 1 year of age. The upper trace is from an infant that was breast fed; the lower, from one that was bottle fed. Even at this young age, these babies were carrying many different contaminants. From Bush et al. 1990.]

This raises important questions about how chemicals interact with one another. Do chemicals in combination produce more of an effect than they do alone? And if so, are the combined effects predictable from the effects of the single chemicals tested one at a time?

Mixtures are one of the huge unknowns in toxicology. The number of chemical combinations experienced by people living in the real world is staggeringly large. With any one person carrying detectable levels of up to several hundred chemicals at one time, and with the mixtures varying from person to person, it is beyond the capacity of modern science to test all mixtures, or even all common mixtures. At the pace of modern regulatory science, it would take thousands of years to resolve issues of safety using experimental methodology.
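To get a feel for the scale of the problem, consider the simple arithmetic of combinations. The sketch below is a back-of-the-envelope illustration only: the pool of 300 chemicals and the rate of 1,000 studies per year are assumed round numbers, not measured figures.

```python
from math import comb

# Back-of-the-envelope illustration: how many distinct mixtures could be
# drawn from a pool of 300 detectable contaminants? (300 is an assumed
# round number, consistent with "up to several hundred" per person.)
n_chemicals = 300

pairs = comb(n_chemicals, 2)     # all 2-chemical combinations
triples = comb(n_chemicals, 3)   # all 3-chemical combinations

print(f"2-chemical mixtures: {pairs:,}")    # 44,850
print(f"3-chemical mixtures: {triples:,}")  # 4,455,100

# Even at an assumed, optimistic rate of 1,000 full toxicological studies
# per year, testing just the 3-chemical mixtures would take millennia,
# before considering doses, timing of exposure, or endpoints.
print(f"Years to test all 3-chemical mixtures: {triples / 1000:,.0f}")
```

And real-world mixtures are not limited to two or three components, so these counts understate the problem.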

What is known, however, raises disquieting questions. While studies are few, they clearly demonstrate that chemicals can interact with one another in causing effects.

For example, a team led by Dr. Warren Porter, a professor at the University of Wisconsin, reported in September 2002 on the impact of an off-the-shelf dandelion herbicide mixture available from many local hardware stores. Instead of testing the components of this mixture one by one, as the EPA and the companies that sell pesticides do, Porter’s team simply used the mixture as it is sold. Their results were dramatically different from what the standard testing reveals. Very low-level exposures to the mixture caused significant reductions in the litter size of exposed pregnant mice, at doses at which the individual components, tested by themselves, produced no effect.

One of the most elegant experiments of this sort to date was carried out on mixtures of estrogen with weakly estrogenic contaminants by Rajapakse et al. (2002). They showed that multiple chemicals in combination with one another, each at a level too low to cause a discernible effect by itself, together dramatically increase the response to the natural estrogen, 17β-estradiol.
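The logic of that result can be sketched with a simple dose-addition model. The code below is a minimal illustration with hypothetical numbers, not the model or data of Rajapakse et al.: it assumes a generic sigmoidal (Hill-type) dose-response and expresses each weak xenoestrogen as an estradiol-equivalent concentration.

```python
# Minimal dose-addition sketch (hypothetical numbers; not the published
# model or data of Rajapakse et al. 2002).

def hill_response(conc, ec50=1.0, hill=1.5):
    """Generic sigmoidal dose-response, expressed as a fraction of maximum."""
    return conc**hill / (ec50**hill + conc**hill)

estradiol = 0.05             # low background level, in arbitrary EC50 units
xenoestrogens = [0.03] * 10  # ten weak xenoestrogens, each expressed as an
                             # estradiol-equivalent concentration, each far
                             # below an individually detectable level

print(f"Estradiol alone:        {hill_response(estradiol):.3f}")  # ~0.011
print(f"One xenoestrogen alone: {hill_response(0.03):.3f}")       # ~0.005

# Dose (concentration) addition: sum the estradiol-equivalents.
combined = estradiol + sum(xenoestrogens)
print(f"Estradiol + mixture:    {hill_response(combined):.3f}")   # ~0.17
```

In this toy calculation the mixture boosts the response more than tenfold even though no single component would register on its own; simple additivity of sub-threshold doses is enough.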

But while experimentation may be manageable with small numbers of compounds in a mixture, as noted above, humans carry many contaminants simultaneously, most of which are virtually unstudied with respect to specific endocrine impacts. The elegant models developed by Rajapakse et al. may prove difficult to extrapolate to complex mixtures.

Synergistic interactions are the most problematic, because they indicate that the effects of multiple chemicals together can be significantly more powerful than might be predicted simply by adding up their effects one at a time. Regulatory science rarely incorporates any interactions; it is incapable, at present, of coping with synergies. Thus synergy profoundly challenges traditional risk analysis calculations.
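The distinction between additivity and synergy can be made concrete with a small comparison. In the sketch below, every number is invented for illustration: the expected combined effect is computed under simple effect addition, and an observed effect well above that prediction is what would be called synergistic.

```python
# Hypothetical illustration of additivity versus synergy (invented numbers).

effect_a = 0.04   # effect of chemical A alone, as a fraction of maximum
effect_b = 0.05   # effect of chemical B alone

additive_prediction = effect_a + effect_b  # simple effect addition
observed_combined = 0.30                   # hypothetical measured effect

ratio = observed_combined / additive_prediction
print(f"Predicted if additive:   {additive_prediction:.2f}")
print(f"Observed in combination: {observed_combined:.2f}")
print(f"Observed / predicted:    {ratio:.1f}x (well above 1 -> synergy)")
```

A risk calculation that assumed additivity, or ignored interactions altogether, would underestimate the combined effect in a case like this several times over.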
 


Bush, B, RF Seegal, and E Fitzgerald. 1990. Human monitoring of PCB urine analysis. In: Organohalogen Compounds Vol 1: Dioxin ’90–EPRI Seminar, Toxicology, Environment, Food, Exposure-Risk (Hutzinger O, H Fiedler, eds.). Bayreuth: Ecoinforma Press. pp 509-513.

Cavieres, MF, J Jaeger and W Porter. 2002. Developmental Toxicity of a Commercial Herbicide Mixture in Mice: I. Effects on Embryo Implantation and Litter Size. Environmental Health Perspectives 110: 1081-1085

Rajapakse, N, E Silva and A Kortenkamp. 2002. Combining Xenoestrogens at Levels below Individual No-Observed-Effect Concentrations Dramatically Enhances Steroid Hormone Action. Environmental Health Perspectives 110:917–921.

The Precautionary Principle: Protecting Public Health and the Environment

Ted Schettler, MD, MPH
Katherine Barrett, PhD
Carolyn Raffensperger, MA, JD
Science and Environmental Health Network

Adapted from an essay by Schettler et al. in McCally 2002.

The precautionary principle is a guide to public policy decision making (Raffensperger and Tickner 1999, Schettler et al. 2002). It responds to the realization that humans often cause serious and widespread harm to people, wildlife, and the general environment. According to the precautionary principle, precautionary action should be undertaken when there are credible threats of harm, despite residual scientific uncertainty about cause-and-effect relationships.

History of the Precautionary Principle

The term “precautionary principle” comes from the German “Vorsorgeprinzip”—literally, “forecaring principle.” Its origins can be traced to German clean air environmental policies of the 1970s that called for Vorsorge, or prior care, foresight and forward planning to prevent harmful effects of pollution (Boehmer-Christiansen S. 1994). The precautionary principle has since been invoked in numerous international declarations, treaties, and conventions, and has been incorporated into the national environmental policies of several countries. It has been applied to specific decisions on food safety, protection of freshwater systems, land development proposals, fisheries management, and the release of genetically modified organisms, among others.

Formulations of the precautionary principle

One formulation of the precautionary principle is found in the 1992 Rio Declaration, adopted by nations participating in the United Nations Conference on Environment and Development:

Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation (Rio Declaration 1992; Shabecoff 1996).

In 1998 a group of scientists, environmentalists, government researchers, and labor representatives from the United States, Canada, and Europe convened at the Wingspread Conference in Wisconsin to discuss ways to formalize and implement the precautionary principle. They formulated the precautionary principle as:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. (Wingspread Statement, 1998)

The precautionary principle says we should attempt to anticipate and avoid damage before it occurs, or detect it early. However it is formulated, each version of the precautionary principle rests on underlying values and three core elements:

  • potential harm—predicting and avoiding harm, or identifying it early, should be a primary concern when contemplating an action;
  • scientific uncertainty—the kind and degree of scientific uncertainty surrounding a proposed activity should be explicitly addressed; and
  • precautionary action

Why the Precautionary Principle?

Humans have transformed land, sea, and air, dominating the earth’s ecosystems in unprecedented ways (McCally 2002). Although many of these impacts were or could have been predicted, often they were surprises. Degradation of life-support services, loss of biodiversity, and direct impacts on human health are a result (Lubchenco 1998, Johnson et al. 2001). Patterns of human disease are changing throughout the world. To remain focused on life expectancy and decreases in childhood mortality is to miss these changing patterns.

Newly emerging infectious diseases and new geographical distributions of older infectious diseases illustrate the capacity of microorganisms to evolve and adapt to changing circumstances. Antibiotic resistance is increasingly common. Chronic diseases like hypertension, heart disease, diabetes and asthma are increasing throughout much of the world. Depression and other mental health disorders are becoming new public health threats in many parts of the world, with profound consequences for individuals, families and communities. Developmental disabilities, including learning disorders, attention deficit hyperactivity disorder and autism, are increasingly common (Schettler et al. 2000). The age-adjusted incidence of a number of different kinds of cancer in the US has increased over the past 25 years (SEER). The incidence of some birth defects is increasing (Paulozzi 1999, Pew 2001). Sperm density is declining in some parts of the world (Swan et al. 1997). Asthma prevalence and severity are increasing sharply throughout the world, often reaching epidemic proportions (Pew 2001).

Recognizing the limits of science, the precautionary principle is intended to enable and encourage precautionary actions that serve underlying values, based on what we know as well as what we do not know. It encourages close scrutiny of all aspects of science, from the research agenda to the funding, design, interpretation, and limits of studies, for potential impacts on the earth and its inhabitants.

Elements of the precautionary principle

Underlying values

The precautionary principle contains a specific directive to take precautionary action and, as with all guiding principles, carries its own values. The principle is based on recognizing that some activities may cause serious, irreparable or widespread harm and that people have a responsibility to prevent harm and to preserve the natural foundations of life, now and into the future. The needs of future generations of people and other species and the integrity of ecosystems are worthy of Vorsorge, of forecaring, and of respect.

A precautionary approach asks how much harm can be avoided rather than asking how much is acceptable. The precautionary principle acknowledges that the world is composed of complex, interrelated systems, vulnerable to harm from human activities and resistant to full understanding. Precaution gives priority to protection of these vulnerable systems.

The potential for harm

Precautionary action is appropriate when there is credible evidence that a particular technology or activity might be harmful, even if the nature of that harm is not fully understood. This means that decision makers must consider potential hazards that have been identified or that are plausible, based on experience, what is known, and/or predicted. Threats of serious, irreversible, cumulative, or widespread harm are of more concern than trivial threats and demand precautionary action commensurate with their nature.

Harm can occur at the level of the cell, organism, population or ecosystem. Impacts may be biological, ecological, social, economic or cultural, and they may be distributed equally or disproportionately among individuals or populations, or geographically, now or in the future. Because systems are complex and outcomes are not always predictable, it becomes extremely important for decision makers to specifically identify the parameters that are used to assess the potential impacts of a proposed activity. Moreover, the standard against which an impact is measured must also be defined. Asking whether a proposed agricultural pesticide is safer for use than another, for example, is very different from asking whether either is necessary at all.

Scientific uncertainty

Recognition of scientific uncertainty is central to the precautionary principle. We are often unable to predict, or even identify in advance, the consequences of a proposed action in complex biological or social systems (Perrow 1984). When inputs are modified, the behavior of complex systems is often surprising. By the time impacts are documented, considerable harm may have occurred. Despite early warnings, the use of lead in gasoline and paint, for example, damaged the brain function of generations of children (Markowitz and Rosner 2000). Sometimes a system crosses a threshold and settles into a new state of relative equilibrium from which there is no turning back. For example, an exotic species introduced into an ecosystem where it did not previously exist may become established and cause irreversible harm.

Understanding cause and effect relationships in complex systems is limited by different kinds of uncertainties. Uncertainty sometimes results from more than a simple lack of data or inadequate models, and because of the nature of the problem being studied it is not easily reduced. In those circumstances, a requirement of absolute “proof” of harm before action can be taken is either ideologically motivated or reflects a fundamental misunderstanding of the limits of science.

Most complex problems have a mixture of three general kinds of uncertainty—statistical, model, and fundamental—each of which should be explicitly considered before deciding how to act.

Statistical uncertainty

Statistical uncertainty is the easiest to reduce or to quantify with some precision. It results from not knowing the value of a particular variable at a point in time or space but knowing, or being able to determine, the probability distribution of the variable. An example is IQ distribution in a population of individuals. In this case, valid decisions can be based on knowing the likelihood of a variable having a particular value.

Typically, however, real-world decisions are made in the context of multiple, interacting variables. For example, the incidence of cancer attributable to exposure to a carcinogen in genetically and geographically diverse people is inherently more difficult to determine than the incidence of cancer in a group of genetically similar rodents living in controlled laboratory conditions and exposed to the same carcinogen. When more than one variable is involved, a model is typically constructed with certain assumptions and simplifications, introducing a new kind of uncertainty.
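The difference between these first two kinds of uncertainty can be illustrated with a short simulation. The sketch below uses the IQ example from the text for statistical uncertainty; the two dose-response models and all of the dose and risk figures are hypothetical, chosen only to show how models that agree at high doses can disagree sharply at low ones.

```python
import random
import statistics

random.seed(1)

# Statistical uncertainty: a single variable with a known distribution.
# IQ is roughly normal with mean 100 and standard deviation 15; sampling
# error can be characterized and shrinks predictably as samples grow.
sample = [random.gauss(100, 15) for _ in range(1000)]
print(f"Sample mean IQ: {statistics.mean(sample):.1f} "
      f"(sd {statistics.stdev(sample):.1f})")

# Model uncertainty: two plausible dose-response models, both anchored to
# the same hypothetical observation of 10% excess risk at a dose of 100
# units, disagree sharply at a low environmental dose of 1 unit.
def linear_no_threshold(dose):
    return 0.10 * dose / 100.0

def threshold_model(dose, threshold=50.0):
    return 0.0 if dose < threshold else 0.10 * (dose - threshold) / 50.0

low_dose = 1.0
print(f"Linear model at low dose:    {linear_no_threshold(low_dose):.4f}")
print(f"Threshold model at low dose: {threshold_model(low_dose):.4f}")
# Both models reproduce the high-dose observation; collecting more
# high-dose data does not tell us which low-dose prediction to believe.
```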

Model uncertainty

Model uncertainty is inherent in systems with multiple variables interacting in complex ways. Even if the statistical uncertainty surrounding the value of a single variable can be defined or reduced, the nature of relationships among system variables may remain difficult to understand. This is particularly problematic for any model of complex systems. We may decide that there will be a tendency for the system to behave in a certain way, but the likelihood of that behavior is difficult to estimate.

Moreover, complex models can include only a finite number of variables and interactions. The real world, however, is a confluence of biological, geochemical, ecological, social, cultural, economic, and political systems. No experimental model can fully account for each of these and their interrelationships. Ongoing research, monitoring, and model refinement may help to reduce uncertainties, but imprecision is inevitable. Indeterminacy, which increases when moving from statistical to model uncertainty, is, at some point, more correctly called ignorance.

Fundamental uncertainty

Fundamental uncertainty encompasses this extension of indeterminacy into ignorance. Ignorance that results from the complexity or uniqueness of a system is of particular concern. This kind of uncertainty is inherent in novel or complex systems where existing models do not apply. Fundamental uncertainty can result from having no valid knowledge of the likelihood of a particular outcome. It can also result from having no knowledge of what some of the outcomes may be. Here we don’t even know what we don’t know. Chemical regulators, for example, allowed chlorofluorocarbons onto the market as safe for commercial use with no inkling that they would damage the stratospheric ozone layer. Fundamental uncertainty is extremely difficult to reduce or otherwise manage, and it demands respect and humility.

Scientific uncertainty and scientific proof

It is imperative to keep these kinds of uncertainty in mind when considering the notion of scientific proof. Proof is a value-laden concept that integrates statistics, empirical observation, inference, research design, and the research agenda into a political and social context. Strict criteria may be useful for establishing “facts”, but by the time a fact or causal relationship has been established by rigorous standards of proof, considerable avoidable harm may already have occurred. The impacts of lead exposure on children’s brain development or asbestos on lung cancer risk are examples. Guided by the precautionary principle, therefore, we are as concerned with the weight of available evidence as we are with the establishment of fact by rigorous standards of proof.

By convention, a considerable amount of consistent evidence is necessary to establish factual “proof” of a cause and effect relationship. Traditionally, in a study of the relationship between two variables, a correlation is said to be statistically significant only if the results show the two to be linked, independent of other factors, with less than a 5% probability that an association as strong as the one observed would have arisen by chance alone. But correlation does not establish causation. In epidemiology, a series of additional criteria, for example those of Hill, are usually applied before causation can be claimed (Hill 1965). The Hill criteria include not only a statistically significant correlation between two variables, but also that the putative cause precede the effect, a dose-response relationship, elimination of sources of bias and confounding, coherence with other studies, and understanding of a plausible biological mechanism. Tobacco smoking, for example, was known to be associated with lung cancer for more than fifty years before a plausible biological mechanism was finally described. At that point, it became impossible to deny that tobacco “causes” cancer.
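To make the statistical-significance step concrete, the sketch below applies a standard two-proportion z-test to a hypothetical exposed/unexposed cohort; the counts are invented for illustration. A small p-value satisfies the conventional significance threshold, but, as the Hill criteria emphasize, it says nothing by itself about temporality, dose-response, confounding, or mechanism.

```python
from math import sqrt, erfc

# Hypothetical cohort (invented counts): 40 cases among 1,000 exposed
# people versus 15 cases among 1,000 unexposed people.
cases_exposed, n_exposed = 40, 1000
cases_unexposed, n_unexposed = 15, 1000

p1 = cases_exposed / n_exposed
p2 = cases_unexposed / n_unexposed
pooled = (cases_exposed + cases_unexposed) / (n_exposed + n_unexposed)

# Two-proportion z-test; two-sided p-value from the normal approximation.
se = sqrt(pooled * (1 - pooled) * (1 / n_exposed + 1 / n_unexposed))
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))

print(f"Risk ratio: {p1 / p2:.1f}, z = {z:.2f}, two-sided p = {p_value:.4f}")

# p is well below the conventional 0.05 threshold, so the association is
# "statistically significant" -- but the Hill considerations (temporality,
# dose-response, freedom from bias and confounding, coherence with other
# studies, biological plausibility) must still be weighed before claiming
# causation.
```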

When exposure to environmental hazards causes immediate and obvious harm, scientific uncertainty about cause and effect relationships is minimal. However, under other circumstances, scientific uncertainty increases dramatically and is often difficult to resolve.

Conditions with long latency periods between a hazardous exposure and the appearance of an adverse health outcome are difficult to study. Study design is necessarily complex and implementation is expensive. Intervening variables that must be considered in a comprehensive study complicate the analysis. Subjects may also be lost to follow up during a prolonged study.

Investigative challenges are also increased when the health outcome is subtle and detectable only by detailed, complex testing. For example, subtle changes in immune system or brain function may be of significant practical importance but difficult to document easily.

Finally, adverse health outcomes are often non-specific and multifactorial in origin. Many diseases, for example asthma or developmental disorders like learning disabilities, are caused by complex interactions of genetic, environmental, and social factors and are not easily linked to single variables. As a result, determining causation with precision is difficult, if not impossible, and some residual uncertainty will always remain. It then becomes the task of policy makers or health care providers to decide how to act in the face of uncertainty. Under these circumstances, according to the precautionary principle, preventive or anticipatory measures are appropriate.

Meanwhile, lack of “proof” of harm is often used to justify ongoing or proposed activities when the weight of credible evidence suggests that harm is plausible and perhaps even likely. Given the limits of scientific inquiry in the world of complex systems, establishing a high bar of proof as a pre-requisite for taking action is certain, in some instances, to result in unnecessary and often irreversible harm (Beauchamp and Steinbock 1999, Kriebel et al. 2001).

Under the precautionary principle, shifting the burden of proof from one party to another, depending on weight of evidence, lack of evidence, scientific uncertainty, and the nature of the harm of concern is one way to address these complexities. Depending on circumstances, differing standards of evidence for demonstrating harm (or safety) may also help protect public health and the environment. For example, concluding that something is more likely than not to cause harm is a very different standard from concluding that it will cause harm beyond a reasonable doubt.

The precautionary principle reminds us that, when dealing with complex systems, evidence is rarely sufficient to quantify or predict the consequences of human activity beyond doubt. Yet, failure to take action, because of a lack of quantifiable proof of harm, is, in itself, a form of action.

Precautionary action

Finally, to serve the values that underlie the precautionary principle, action should be anticipatory, preventing harm to public health and the environment despite underlying scientific uncertainty. The precautionary principle does not specify which actions are appropriate under particular circumstances. It is a guiding principle, not a set of binding rules. The choice among potential anticipatory actions, however, should be informed by:

  • full consideration of the weight of evidence for potential harm
  • the kind and degree of scientific uncertainty associated with that evidence
  • participation of potentially affected parties, and
  • an assessment of potential alternative actions.

Implementing the precautionary principle

The precautionary principle requires a systematic look at the potential for various kinds of harm, the associated scientific uncertainty, and the underlying values, and then counsels precautionary action. Implementing it involves several interrelated steps:

1) Goal setting

Goal setting is particularly important for establishing environmental and health policies. Goal setting requires us to ask, “Where do we want to be at some future time? What are we trying to accomplish?” Starting with agreed-upon goals and then looking at where we are now can help in developing a strategy for getting from here to there. Of course, not all goals are generally agreed upon or represent shared visions. But as goals are made explicit, the values and assumptions underlying decision-making processes will also become more transparent and may lead to processes for reconciling differences.

2) Assessing alternatives

A truly precautionary approach includes examination of a range of options for meeting policy goals. Currently, in most settings there are few requirements for comprehensively assessing a range of alternatives to proposed activities. For example, current regulatory policies emphasize a risk assessment/risk management framework. This approach attempts to estimate the probability of harm (risk) from a proposed activity and then asks whether that harm is acceptable. Risk management techniques are intended to minimize the risks of the proposed activity, but not to question if the activity is necessary for achieving broader goals.

Alternatives assessment instead asks whether the harm is necessary and if there might be other ways to achieve agreed upon goals that would avoid harm altogether. When alternatives assessment is applied earlier rather than later in policy decision making, innovative approaches that reflect societal goals, ecological principles, and the values that underlie the precautionary principle are more likely to emerge. Assessing alternatives can also lead to actions that truly respect the level of uncertainty in given circumstances.

3) Adopting transparent, inclusive, and open processes

A precautionary approach requires open, inclusive, and transparent processes that are initiated early in decision making, beginning with goal setting, where the health and well-being of the public and environment are at stake. A participatory approach is justified by a belief in the fundamental fairness of democratic decision making and by the thought that a broad range of experience leads to better science and decision-making. Transparency also helps to ensure accountability among decision-makers.

4) Analyzing uncertainty

A precautionary approach requires explicit recognition of the scientific uncertainty inherent in understanding the potential for harm from an ongoing or proposed activity.

Statistical uncertainty may be reduced with more data collection. Model and fundamental uncertainty, however, are more difficult to reduce. When model or fundamental uncertainty predominates, a requirement to resolve uncertainty as a prerequisite for decision making shows a fundamental lack of understanding of the limits of science or, alternatively, may be nothing more than a tactic to maintain the status quo. In the US, for example, President Bush used scientific uncertainty about the impacts of greenhouse gases on global climate to justify US rejection of the Kyoto treaty on global climate change and to promote an energy policy weighted heavily in favor of increasing fossil fuel extraction and consumption.

5) Burden of proof and responsibility

Under the precautionary principle, the burden of proof regarding the safety of an activity may shift with the nature of potential harm and scientific uncertainty. Requirements for evaluating the safety of a proposed activity also vary with the political context. The precautionary approach suggests that the burden of proof is better thought of as the burden of persuasion and responsibility. This avoids the fruitless assertion that absolute safety can never be “proven.” Rather, it acknowledges that, as the potential for serious, irreversible harm and scientific uncertainty increase, the proponent of an activity has an increasing obligation to account for the safety of the activity and take responsibility for adverse impacts that may result from it. Then, more comprehensive testing, monitoring, and assumption of liability shift the onus onto the proponent.

6) Learning and adaptation

Under the precautionary approach, appropriate research and monitoring are essential. Decisions must be periodically re-examined, based on new information. The research agenda of private and public institutions may be designed to reflect broad social goals that extend well beyond developing marketable products. In this way the precautionary approach is designed with feedback loops that search for and take into account new information and unintended consequences of provisional decisions.

7) Options for precautionary action

Choices among potential precautionary actions are made only after full analysis of potential harms and scientific uncertainty. Precautionary action can take a number of directions. At the level of regulation, when research and development of a product or technology are complete and only regulatory approval is needed for production and marketing, the options are ordinarily limited to approval, denial, or approval with conditions such as limits, monitoring, labeling, or the posting of a performance bond.

At a pre-regulatory level, however, precautionary action might include a closer look at the problems that proposed technologies are intended to solve. How was the problem defined, and by whom? Was the problem framed in the only or best way? Are there alternatives to the proposed technology?

Evaluating a full range of possible precautionary measures again requires a multidisciplinary, participatory approach in order to elicit relevant knowledge and set priorities. Responses to scientific uncertainty, as well as various kinds of harm, legitimately vary among individuals, societies, and cultures. It is obviously easier to consider alternatives, multiple sources of information, and priorities earlier in the process than when a developed product or technology is presented for regulatory approval.

The Relationship of the Precautionary Principle to Risk Assessment

A risk assessment approach to public policy decision making dominates in the US and many parts of the world. With few exceptions, risk assessments attempt to estimate the potential risks of proposed products or activities on a case-by-case basis without consideration of the complete context in which the activity will be carried out and rarely with any consideration of alternatives to the proposal (O’Brien 2000).

The relationship between the precautionary principle and risk assessment reflects differing views of a number of factors, including how much we know, how much we can know, how broadly questions should be framed, which questions should be asked, who should frame the questions, the value of non-human life, our responsibility to future generations, and how we plan for the future.

Quantitative risk assessments usually respond to narrowly framed questions and are often flawed by simplifying assumptions. Risk assessments almost always fail to consider a full range of biological, ecological, social, cultural, and economic impacts and how they are distributed. Advocates of a regulatory system dominated by quantitative risk assessments argue that they are inherently precautionary through the use of conservative assumptions and safety factors. Risk assessors, however, often fail to distinguish among various kinds of uncertainty and tend to misclassify some model and fundamental uncertainty as statistical, to which they apply “uncertainty” factors. When model and fundamental uncertainty predominate in a system, this approach may lead to large underestimates of risk and to failure to predict adverse impacts removed in time and space, and it will, of course, completely fail to predict surprises or novel impacts.

Risk assessors often claim that the precautionary principle is “anti-science” or a tool to keep certain technologies from the marketplace. In fact, a precautionary approach encourages more science rather than less, acknowledging the need for precautionary action while addressing scientific uncertainty that may be intractable using available tools.

Decision making in the face of uncertainty is, of course, necessary, frequently difficult, and requires assessment of relative risks. Guided by an overarching precautionary principle, however, such assessments are not the exclusive domain of risk analysts; they can be fully participatory and can include full consideration of a range of alternatives.

Conclusion

A precautionary approach is based on the ethical notions of taking care and preventing harm. It arises from recognition of the extent to which scientific uncertainty and inadequate evaluation of the full impacts of human activities have contributed to ecological degradation and harm to human health. It can be used to help address these circumstances, bringing together ethics and science, illuminating their strengths, weaknesses, values, or biases. The precautionary principle encourages research, innovation, and cross-disciplinary problem solving. It serves as a guide for considering the impacts of human activities and provides a framework for protecting children, adults, other species, and life-sustaining ecological systems now and for future generations.


 

For example: Montreal Protocol on Substances that Deplete the Ozone Layer (1987); Ministerial Declaration of the Second World Climate Conference (1990); Bergen Ministerial Declaration on Sustainable Development (1990); Bamako Convention on Hazardous Wastes within Africa (1991); Framework Convention on Climate Change (1992); United Nations Conference on Environment and Development (1992); Helsinki Convention on the Protection and Use of Transboundary Watercourses and International Lakes (1992); Maastricht Treaty on the European Union (1994); US President’s Council on Sustainable Development (1996); Cartagena Protocol on Biosafety (2000); Stockholm Convention on Persistent Organic Pollutants (2001).


 

References

Beauchamp DE and B Steinbock (eds.). 1999. New Ethics for the Public’s Health. New York: Oxford University Press.

Boehmer-Christiansen S. 1994. The precautionary principle in Germany—enabling government. In: Interpreting the precautionary principle. Ed: O’Riordan T, Cameron J. Earthscan Publications, Ltd.; London.

Hill AB. 1965. The environment and disease: association or causation? Proc R Soc Med 58:295.

Johnson N, C Revenga and J Echeverria. 2001. Managing water for people and nature. Science 292:1071-1072.

Kriebel D, J Tickner, P Epstein, J Lemons et al. 2001. The Precautionary Principle in Environmental Science. Environ Health Perspect 109(9):871-876.

Lubchenco J. 1998. Entering the century of the environment: a new social contract for science. Science 279:491-497.

McCally M. (ed.) 2002. Life Support: The Environment and Human Health. Cambridge, MA: MIT Press.

Markowitz G and D Rosner. 2000. “Cater to the children”: the role of the lead industry in a public health tragedy, 1900-1955. Amer J Pub Health 90(1):36-46.

O’Brien M. 2000. Making Better Environmental Decisions: An alternative to risk assessment. Cambridge, MA, MIT Press.

Paulozzi L. 1999. International trends in rates of hypospadias and cryptorchidism. Environ Health Perspect 107(4):297-302

Perrow C. 1984. Normal Accidents: Living with High Risk Technologies. New York: Basic Books.

Pew Environmental Health Commission.

Raffensperger C and J Tickner (eds.). 1999. Protecting Public Health and the Environment: Implementing the Precautionary Principle. Washington, DC: Island Press.

Rio Declaration on Environment and Development. 1992.

SEER Cancer Statistics Review, 1973-1996. Bethesda MD: National Cancer Institute.

Schettler T, K Barrett and C Raffensperger. 2002. The Precautionary Principle. In: Life Support: The Environment and Human Health. Ed: McCally M. Cambridge, MA: MIT Press.

Schettler T, Stein J, Reich F, Valenti M. 2000. In Harm’s Way: Toxic Threats to Child Development. Greater Boston Physicians for Social Responsibility.

Shabecoff P. 1996. A New Name for Peace: International Environmentalism, Sustainable Development, and Democracy. Hanover and London: University Press of New England, pp 86, 156, 172.

Swan S, E Elkin and L Fenster. 1997. Have sperm densities declined? A reanalysis of global trend data. Environ Health Perspect 105:1228-1232.

Does “the Dose Make the Poison”?

John Peterson Myers, PhD
CEO of Environmental Health Sciences

A core assumption of traditional toxicology is “the dose makes the poison.” Generations of toxicologists have begun their studies by learning this, countless experiments have provided support, and the laws protecting people from undue exposure all assume that it is true.

“The dose makes the poison” is taken to mean that the higher the dose, the greater the effect. And this implies that low exposures are less important. Indeed, based on “the dose makes the poison”, it is commonly argued that “background” levels of contamination aren’t worth worrying about.

Yet new evidence emerging from modern scientific research that combines toxicology, developmental biology, endocrinology and biochemistry is demonstrating that this assumption is wrong, at least in its simplest and most widely used form. And the implications of this new realization are profound, because it means that the safety standards used to protect public health are built upon false assumptions and are likely to be inadequate.

Two core patterns in this emerging research violate simplistic uses of “the dose makes the poison.”

  • One arises because sensitivity to contamination is not the same at all stages of the life of an individual. The same low dose that may pose no risk to an adult can cause drastic effects in a developing fetus.
  • The second involves dose-response curves in which low levels of a contaminant actually cause greater effects than higher levels, at the same stage of development. These dose-response curves, shaped like inverted U’s, are called “nonmonotonic dose-response curves.”

Both of these patterns require a more sophisticated understanding of what “the dose makes the poison” means.

In the case of sensitivity varying from one stage of development to the next, “the dose makes the poison” is valid as long as one doesn’t wrongly assume that measurements at one stage can be extrapolated to another. The assumption holds true within a stage of development (as long as there is no nonmonotonic dose-response curve; see below), but not across stages.

A recent dramatic example of this differential sensitivity comes from work comparing the impact of an herbicide on tadpoles vs. adult frogs. The transformation from tadpole to frog is exquisitely sensitive to chemical disruption of development. A dose of atrazine (a commonly used herbicide) 30,000 times lower than the lowest level known to affect adult frogs caused 20% of exposed tadpoles to become hermaphroditic (carrying both male and female sexual organs) in adulthood.

This pattern seen in frogs is not an exception. The scientific literature is full of examples demonstrating that in its early stages of development an organism can be more vulnerable than during adulthood. Thus it is important to realize that “the adult dose does not make the fetal poison.”

Inverted-U or nonmonotonic dose-response curves (NMDRCs) pose a more difficult challenge to the traditional interpretation of “the dose makes the poison,” i.e., that higher doses have greater impacts than lower doses. In NMDRCs, lower doses can have larger impacts than higher doses. One recent example arose in work on proliferation of prostate tumors:

In that work, a very low dose (1 nanomolar) of bisphenol A induced a stronger proliferative response than a much higher dose (100 nanomolar), and the response to 1 nM was significantly greater than in controls.
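What a nonmonotonic curve looks like can be sketched with a simple model. The code below is purely illustrative; the functional forms and parameters are assumptions, not the published bisphenol A data. It contrasts a conventional monotonic, saturating dose-response with an inverted U built as low-dose stimulation that is progressively shut down by a process engaged at higher doses.

```python
# Illustrative only: assumed functional forms, not the published BPA data.

def monotonic(dose, ec50=10.0):
    """Conventional saturating dose-response: never decreases with dose."""
    return dose / (dose + ec50)

def inverted_u(dose, ec50_stim=1.0, ec50_inhib=50.0):
    """Nonmonotonic response: low-dose stimulation multiplied by an
    inhibitory process that takes over as the dose rises."""
    stimulation = dose / (dose + ec50_stim)
    inhibition = ec50_inhib / (dose + ec50_inhib)
    return stimulation * inhibition

for dose in [0.1, 1, 10, 100, 1000]:
    print(f"dose {dose:>6}:  monotonic {monotonic(dose):.2f}   "
          f"inverted-U {inverted_u(dose):.2f}")

# In the inverted-U column the response peaks at an intermediate dose and
# then falls, so a low dose can produce a larger effect than a high one --
# exactly the pattern that testing downward from high doses until "no
# effect" is found is likely to miss.
```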

Many examples of NMDRCs are now being published in the scientific literature. This raises three questions:

Why were they not found commonly before? Several factors may have contributed to the infrequency with which NMDRCs were reported previously in the scientific literature.

  • One may be simply that few scientists looked. Driven by “the dose makes the poison,” toxicologists would perform experiments at higher doses and work down the dose-response curve until they found a level at which no response was detectable. Experiments at doses 1/10th to 1/100th of that no-response level made no sense. But without experiments at much lower doses, the low-dose effects of NMDRCs could not be detected.
  • A second impediment arose from the statistical design used to analyze results in toxicology. Designs built on the assumption that “the dose makes the poison” are unlikely to find NMDRCs.

Why do they occur? This is an active area of research. Several ideas have been offered.

  • One is that within the range of very low doses showing NMDRC patterns, enzymatic defenses against chemical contaminants are not activated. The supposition is that at these very low levels the contaminants fall within the range where their biological activity resembles that of the normal hormonal signals controlling development. As contaminant levels rise, defense mechanisms are activated, shutting down the original response.
  • Another is that as the low dose rises into a higher range, the contaminant stimulates new responses, perhaps activating different hormonal pathways that then operate in a negative feedback loop to shut down the system involved in the original response.

What do they mean for public health? NMDRCs are extremely troubling for regulatory toxicology because their presence undermines the validity of generations of toxicity testing that have been based on the assumption that “the dose makes the poison.” Prevailing federal safety standards are built upon research methods that are unlikely to find low-dose effects, and very few chemicals have been tested in ways that would reveal them.

For that reason NMDRCs were the subject of intense debate among scientists as it became clear they were not uncommon. The US National Toxicology Program went so far as to convene a special “low-dose panel” of scientists to conduct a full-scale review. The panel’s findings, published in 2001, confirmed the reality of NMDRCs.

So what do NMDRCs mean for “the dose makes the poison”? In a literal sense the dose still matters: in the example of prostate tumor proliferation above, a dose of 1 nanomolar bisphenol A produces a different response than does 100 nanomolar. But with BPA and prostate proliferation, a very low dose makes the greater poison. It is no longer safe to assume that lower doses have lower impacts than higher doses. The science used to establish public exposure standards needs to incorporate this new concept.