05/23/2011 04:20 pm ET | Updated Jul 23, 2011

The Danger of Industrial Optimism: The Challenger Syndrome

On January 28, 1986, the Challenger space shuttle exploded 73 seconds after liftoff, killing all seven astronauts and prompting a full-scale investigation of the tragedy. The Rogers Commission found that the technical cause of the accident was the failure of two rubber O-rings to seal properly in the solid-rocket booster, allowing its hot gases to burn through and ignite the main fuel tank. Testimony before the commission revealed not only that NASA and Morton Thiokol, the contractor for the solid rocket motors, had known about this O-ring problem for years, but that they had accepted a major -- and tragic -- alteration in the launch-decision procedure for this flight. For the first time in shuttle history, the contractor was asked to prove that it was NOT safe to fly, rather than meeting the usual demand to prove it was safe to fly. Since the engineers could not prove the shuttle would explode if launched, the "go" was given.
This 180-degree turnaround in framing the decision is worth recalling as we confront current industrial and scientific endeavors. Two cases illustrate how this "Challenger Syndrome" drives private and public sector thinking.

• Nanotechnology -- the manipulation of matter at the atomic scale to create materials and machines that cannot be seen without the most powerful microscopes -- is a burgeoning field for commercial and military applications. The federal government is spending over $2 billion a year on nano, and 25 different agencies are involved. Worldwide, over $13 billion is spent yearly in the equivalent of a twenty-first century "space race" that has seen the number of nano patents grow exponentially. The promise of nano is exciting. Nanoparticles that target specific cancers can theoretically be injected and then heated to kill only cancer cells. Nanotechnology offers the potential for mass-produced solar power at a fraction of the cost of current solar panels. Yet there are troubling questions. No one is sure what happens to nano materials (smaller than a cell) when they enter the bloodstream or environment. We know they can cross the blood-brain barrier. Do the nanoparticles now put in many sunscreens (to make them less pasty than the traditional version) pose a health risk? What happens to marine life (and ultimately people) when nanosilver, which has great potential to clean water, enters streams and rivers? Nano is now almost entirely unregulated as the government struggles to define when a product with nano materials needs to be labeled as such and tested for its impact before widespread commercial use.

• Hydraulic fracturing -- the use of horizontal drilling and pressurized, chemical-filled water to release natural gas in shale deposits -- is also growing exponentially. More than 14,000 wells have been drilled in Texas since 2002, and more than 1,500 were drilled in 2010 in the Marcellus Shale, which underlies much of western Pennsylvania, West Virginia, western and southern New York, and eastern Ohio. Thousands more such wells are on the way, given the huge volume of rather cheap natural gas to be extracted and the royalties paid to property owners willing to allow drilling -- literally, in some cases, in their own back yards. Yet evidence is mounting that the chemical mix in the pumping water (most of whose compounds companies do not make public) can cause disease and pollute groundwater. There have been reports of tap water igniting in front of shocked residents because of gas that had migrated into it. The EPA still lacks sufficient authority to regulate this practice under the Clean Water Act, despite the fact that watersheds that could be negatively affected serve heavily populated areas.

In both these cases, commercial and economic pressures drive practices that hold potentially widespread though still largely unproven dangers. There are other examples. The application of biosolids -- partially treated human and industrial waste that includes heavy metals -- as fertilizer on farms is commercially attractive and growing, even though a 2002 National Academy of Sciences study concluded that "there is uncertainty about the potential for adverse human health effects from exposure to biosolids." We cannot prove that biosolids are dangerous, so their use grows.

Commercial and private activities that generate carbon dioxide are still permitted without major curbs on emissions because, many claim, we cannot prove that increased atmospheric levels of carbon dioxide are dangerous to human health or the environment. No one could prove that Deepwater Horizon would explode or that the blowout preventer would fail, so drilling continued.

The Challenger Syndrome puts us in a box: we must prove something is unsafe -- that it will damage human health or the environment -- before we can stop it. For the Challenger crew, that was tragic. For us as a nation, it could be catastrophic. Unlike the Challenger accident, whose impact was contained, the damage done by nanotechnology, fracturing, biosolids, and carbon dioxide would be systemic and widespread -- as the Gulf oil spill demonstrated. When Challenger blew up, NASA put a hold on further launches until it could solve the problem. If groundwater gets polluted, toxic materials enter the cells of living things, or atmospheric carbon dioxide becomes too great, there is no way to stop the impact until a fix is found. As science and technology advance to where they can affect the frontiers of scale -- the atmospheric, the oceanic, and the cellular -- the potential dangers grow.

This concern will generate the traditional response of a society where individual (and corporate) freedom is pitted against community interests: if we don't do these things (nanotechnology, fracturing, etc.), someone else will, and/or we will pay more, lose jobs, and miss out on the promise of science and technology. These are not counterarguments to be ignored, but this need not be an "either-or" issue. We can give sufficient attention and resources to studying the public safety, health, and ethical dilemmas attendant on new technologies before widespread use, though industry will need to be pushed to do so by government regulation or funding. We can account for the costs of possible catastrophic failure in cost-benefit analysis. We can limit a technology's application until we are surer of its impact. Until we do these things, the economics of potential gain will continue to drive decision making. We will remain "go" for the launch of new technologies, propelled by a "can-do" optimism that is as dangerous as it is alluring.