Morality, Religion and Experimenting on You

To improve the process of ethical oversight of research, we need to change our attitudes and recognize far more fully that complicated moral issues, tensions, and ambiguities are involved.

To help their soldiers on the front, Nazi physicians sawed off the limbs of concentration camp prisoners and then tried to reattach these body parts, but failed. Other prisoners were forced into the snow to measure how long it took for them to freeze to death.

In response, the Nuremberg Tribunal produced the Nuremberg Code, the first formal ethical guidelines on how to conduct experiments on human beings.

Since then, science has grown enormously, improving our lives in areas from cancer to depression.

But experiments on humans have become not only more common, but more complicated and controversial, often raising profound moral dilemmas.

The pharmaceutical industry, rather than the NIH, now funds most biomedical research, and conducts most of its studies in developing countries, rather than in the U.S.

But deep moral challenges emerge -- whether, for instance, experimental drugs, if proven effective, should be made readily available to these poorer populations, many of whom lack basic health care. To what degree should these companies be responsible if experiments kill patients, and should that obligation differ if the patients are American or Ugandan? Companies often require that study patients in the U.S. have sufficient health insurance before enrolling, excluding patients without adequate coverage. But is that unfair?

Informed consent forms now often run 40 pages, crammed with scientific and legalistic jargon that most patients don't understand. Comprehension problems multiply in the developing world, where many subjects are only semi-literate. Researchers routinely enroll patients who don't fully understand the consent forms they sign. Drug companies now pay doctors thousands of dollars to switch patients from generic medications to more expensive experimental drugs that may work less well. Facebook has conducted studies on users, successfully altering their moods without their knowledge.

Some people see Nazi crimes as morally evil but view these experiments as merely business practices that may be a bit unfair. The two sets of activities clearly differ in magnitude, but both raise the same underlying moral questions: How much obligation do we have to each other? Is it ever OK to harm another person, and if so, when?

These are fundamentally moral questions, long central to religion, but increasingly posed in hospitals and clinics. In our modern secularized world, how are they now addressed, and how should they be?

In 1972, a journalist revealed the moral lapses of the Tuskegee syphilis study, in which researchers, funded by the U.S. government, studied the course of the disease in African-American men in the South. When penicillin became available as a definitive treatment, the researchers decided not to mention or offer it to the men, since doing so would destroy the experiment.

In response, Congress passed the National Research Act of 1974, which led to the creation of research ethics committees, known as institutional review boards, or IRBs, to oversee the ethics of research. Today, the U.S. has around 5,000 of these committees; nearly every hospital and university either has one or uses one elsewhere.

Yet these boards have themselves come under increasing criticism. They frequently operate wholly behind closed doors, have approved research that violated ethical guidelines, and have delayed or blocked other important studies.

Since the Act passed, science has changed. Many studies now involve 40 different hospitals, and must then be approved by 40 separate committees, which often disagree, each requiring changes to different parts of the study, such that data from these sites cannot be compared or combined.

These committees tend to assume that each is always right -- that each represents its local community's values and thus cannot be challenged, even when they disagree with one another. If an IRB adamantly rejects a study, the researcher is stuck; no external appeals process exists.

To address these problems, President Obama proposed changes, including the use of a single centralized IRB for multi-site studies, and in February 2015 sent a revised set of recommendations to the Office of Management and Budget, though the content of these revisions remains secret. In December 2014, the NIH also recommended such centralization; a period of public comment ended earlier this year, and the proposals are now being revised.

But while these recommendations can help certain studies, they are insufficient, and miss larger points: that deep moral quandaries, ambiguities and tensions are often involved.

IRBs wrestle with dilemmas that often lack a single right answer. These committees do their best, but their members often have no training in moral reasoning.

Medical centers want certitude -- one and only one "right" answer -- and technological fixes. Yet recent research I conducted found that these committees disagree not because of local community values but for other, idiosyncratic reasons; IRBs even in the same institution and community often disagree with each other. Rather, committees vary because of the personalities of whoever happens to be a member, and whether the institution has recently faced a scandal or lawsuit.

When confronting moral dilemmas, many people look for answers in the Old or New Testament, the Quran, or the sayings of the Buddha. But these modern scientific problems did not exist when those religious texts were written. We may be able to draw some broad principles from them, but we are left to wrestle with the novel complexities and challenges of each case.

Unfortunately, we often lack the critical tools and skills to address these complex moral decisions. Most universities do not require any classes in moral values or decision-making. We follow our implicit gut feelings, but we need to do more.

To improve the process of ethical oversight of research, we need to change our attitudes and recognize far more fully that complicated moral issues, tensions, and ambiguities are involved. We need to require ethics training for IRB members, and a broad, open discussion of the underlying social and moral tensions at stake.

Such improvements can aid not only research subjects, but also our health, our science, and our moral lives -- as individuals, as a country, and as a world.
