Think for Yourself: Is it Regrettable but Acceptable to Kill Civilians?
This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site.

This past weekend a U.S. air strike killed at least seventeen people but missed its intended target, a top al Qaeda operative who did not appear at the event as expected. The question for us as citizens is whether such errors are a regrettable but reasonable price of the overall effort to stop terrorists.

Many critical policy decisions are made out of fear of error. What most people do not realize is that an opposite error is usually also very much possible. In this case, the opposite error is not striking when we think we have a reasonable chance to succeed, and thus allowing the targeted person to continue to organize murders, perhaps mass murders. More common examples of the two types of error arise when jurors listen to all the evidence presented in court and then decide "guilty" or "not guilty"; when voters decide "yes" or "no" on ballot initiatives; when leaders decide to "invade" or "not invade." In each of these examples, two opposite errors are possible. In the first, the person might be found guilty when, in fact, not guilty; or, conversely, found not guilty when, in fact, guilty. Because the information necessary to make a decision is almost never perfect, even in the hands of the wisest, fairest, and most objective person, there will almost always be a risk of both types of error. Over time, and with many such judgments, both types of errors will occur.

Our first frustration is with data that are, at any given moment, irreducibly imperfect. Obviously, we want better information on which to base our decisions. And perhaps through technology, or better systems, or human education, we can get better information over time for future cases. But often we find that we must decide now, with the information that we do have, and, in the long run, across many such cases, the two types of errors are inevitable.

The next frustration is that, given the data available at present, there is no way to reduce the overall amount of error. We can only shift it from one type of error to the other. If we insist on more reliable data before firing the weapon, so as to reduce the error seen this weekend, then automatically and unavoidably we increase the opposite error - that the targeted persons walk away. Making errors a certain percentage of the time is unavoidable; we can only choose which type of error to make less often and, automatically, which to make more often.

Faced with making one of these two types of errors, how much evidence does one require before acting? The decision depends on which error one fears the most and which success brings the greatest gain. These are value judgments that the decision-makers face. Perhaps in death-penalty cases in our domestic courts we decide that we have a greater fear of finding an innocent person guilty than of finding a guilty person innocent. Thus, we may choose a very high standard of evidence for finding guilt, reducing the risk of declaring an innocent person guilty. But in using such a high standard, we also automatically raise the likelihood of finding a guilty person not guilty. Over time, we make plenty of errors, but we make fewer of the error that we fear the most. Our culture has a particular value system that weighs these errors against one another; China has an infamously different assessment of those two errors, and the Chinese act accordingly. If culture determines the relative value of the errors, then we must consider whether civilian culture differs from military culture: the amount of evidence that strikes civilians as appropriate before pulling the trigger in a backyard argument may differ from the amount of evidence appropriate before pulling the trigger in a war.

Each type of error has a constituency. In the decision to target the operative last weekend, the Americans and Europeans who are likely to be the next victims of a terrorist attack will view the two possible errors differently than the villagers near the targeted site. If one constituency's concern is more influential in the decision-making process, then the standard of evidence will be set in its favor. Of the two constituencies involved in the error of civilian deaths this past weekend, one has a much greater influence in Washington, D.C. It is no surprise that the decision was made accordingly.

The third frustration is that better information often comes to light after the decision has been made. At that point, we are glad to have more information; we just wish it had been available sooner. It is tempting to pass judgment on the decision and the decision-maker once better information is available, but a person can only be held responsible for the information that was available to him or her at the time. (He or she is, of course, also responsible for the earlier decisions that shaped what information was available for the present decision.) Evidence found after an honest and unprejudiced judgment is historically useful and may warrant reversing the decision afterward. It may inform us as to how we might obtain better information for such decisions in the future. But it does not bear on the competence of the original decision per se. Unprejudiced judgments are, well, judged in light of the information actually in hand at decision-making time.

The fourth frustration is that the decision maker is not always the "wisest, most fair, and objective person." Should a judge whose family has recently been assaulted preside over an assault case? Perhaps she'll choose to rush a verdict that could reasonably have waited for better information; perhaps she'll admit some types of evidence preferentially; or perhaps she'll misinterpret the evidence by ignoring something that was valid or exaggerating something else beyond its merits. There are reasons enough for competent judges to make errors (such as having incomplete or erroneous information), so to compound them with a prejudiced or corrupt system of gathering and assessing information is devastating to the integrity of the decision-making process.

How then do we manage these two types of errors? When a speaker points out one type of error - the welfare queen, for example, or the apparently reformed felon who has been sitting in jail for decades rather than having been released - we must acknowledge that the opposite type of error exists simultaneously. We must look for evidence of how often we make each type of error and ask whether that strikes us as the optimal, if imperfect, balance in an imperfect world. We must assess the balance or prejudice in how the information was collected and analyzed, and seek to improve the system and the community so that such uncertain decisions arise less frequently. What a rational society should not do is act appalled and outraged at every error - errors are unavoidable here and now. We can take steps now to improve, in the long run, our data collection and verification systems and our insight and nuance of judgment, but that is seldom or never possible in the brief moment in which critical decisions must be made.

When we find ourselves opining and voting out of fear or outrage about a particular error, we owe it to ourselves to consider the opposite error as well and whether we are comfortable with the overall balance in a world of unavoidable error and inevitable injustice.
