My colleague Thomas Seager and I recently co-wrote "Digital Jiminy Crickets," an article that proposed a provocative thought experiment. Imagine an app existed that could give you perfect moral advice on demand. Should you use it? Or would outsourcing morality diminish our humanity? Our think piece merely raised the question, leaving the answer up to the reader. However, Noûs -- a prestigious philosophy journal -- published an article by Robert J. Howell that advances a strong position on the topic, "Google Morals, Virtue, and the Asymmetry of Deference". To save you the trouble of getting a Ph.D. to read this fantastic, but highly technical piece, I'll summarize the main points here.
It isn't easy to be a good person. When facing a genuine moral dilemma, it can be hard to know how to proceed. One friend tells us that the right thing to do is stay, while another tells us to go. Both sides offer compelling reasons -- perhaps reasons guided by conflicting but internally consistent moral theories, like utilitarianism and deontology. Overwhelmed by the seeming plausibility of each side, we end up unsure how to solve the riddle of The Clash.
Now, Howell isn't a cyber-utopian, and he certainly doesn't claim technology will solve this problem any time soon, if ever. Moreover, Howell doesn't say much about how to settle the debates over moral realism. Based on this article alone, we don't know whether he believes all moral dilemmas can be solved according to objective criteria. To determine whether -- as a matter of principle -- deferring to a morally wise computer would diminish our humanity, he asks us to imagine an app called Google Morals:
When faced with a moral quandary or deep ethical question we can type a query and the answer comes forthwith. Next time I am weighing the value of a tasty steak against the disvalue of animal suffering, I'll know what to do. Never again will I be paralyzed by the prospect of pushing that fat man onto the trolley tracks to prevent five innocents from being killed. I'll just Google it.
Let's imagine Google Morals is infallible, always truthful, and 100% hacker-proof. The government can't mess with it to brainwash you. Friends can't tamper with it to pull a prank. Rivals can't adjust it to gain a competitive advantage. Advertisers can't tweak it to lull you into buying their products. Under these conditions, Google Morals is more trustworthy than the best rabbi or priest. Even so, Howell contends, depending on it is a bad idea.
Although we're living in a time when many worry about attacks on cognitive liberty, Howell isn't concerned about Google Morals chipping away at our freedom. Autonomy isn't on the line because users always get to decide whether or not to use the tool. In the thought experiment, nobody is mandated to use the app the way someone might be forced to take Ritalin or anti-psychotics to aid deficient or dangerous judgment. Indeed, even if more detail were filled in, and it turned out that intense social pressure arises to use Google Morals, one could still contend the buck stops with the individual decision-maker. Moreover, once Google Morals offers its wise advice, the user still gets to determine whether or not to listen. Google Morals can tell you to order the vegetarian entrée, but it can't stop you from smashing the phone with one hand while shoving a cheeseburger into your mouth with the other.
Howell's concerns address the challenges Google Morals presents to virtue and the knowledge that accompanies moral excellence. Specifically, he argues that deferring to Google Morals can reveal pre-existing character problems and stunt moral growth, both short and long term.
Let's start with Howell's first concern, which we could call the pre-existing condition test. Children start life selfish and uninformed. When parents teach them how to be responsible, they begin by issuing threat-based, categorical commands, such as: "Share your toys, or else!" Parents expect the commands to be followed, even before their kids are old enough to understand why they are being issued. Beginning moral education this way, through deference, doesn't pose a problem, so long as the strategy is used as a developmental stepping-stone. At some point, the kids will get older, and they'll be held to new, age-appropriate standards. If no-longer-little Johnny or Susie is unable to make well-informed, independent moral judgments, they'll be criticized for being immature. Perhaps their parents will be blamed for encouraging them to remain in a regressive state.
With this conception of development in mind, if an adult defers to Google Morals because he or she doesn't know a morally relevant fact, that consultation might be deemed fine. Some facts are hard to come by. Or, a situation might arise that is so complicated that deference is needed: even a good-faith effort doesn't enable the person to separate relevant from irrelevant detail, or to differentiate primary from secondary concerns. While other acceptable situations might exist, too, a problem arises when deference occurs without the presence of "natural virtue," i.e., "dispositions to feel and act appropriately, as well as have certain intuitions and pro-attitudes".
For example, if you need to ask Google Morals whether it is wrong to make a lying promise to your best friend, you display callousness and disloyalty. This, in turn, reveals poor habituation and the influence of an underdeveloped character. Being ignorant of certain morally relevant facts might fit this bill, too. For example, not knowing that animals can suffer isn't the same thing as not knowing the population of the state you live in. Whereas the latter is a trivial issue, the former reveals a lack of basic moral interest in the world.
On the matter of how Google Morals might stunt moral growth, Howell identifies a number of pitfalls. To clarify the ones concerning how moral knowledge might be impeded, it is useful to begin with a caveat.
On the one hand, Howell is careful to avoid asserting that using Google Morals necessarily short-circuits our ability to do the right thing for the right reasons. Let's say Google Morals tells you that lying is wrong. Sure, you can walk away muttering an appeal to authority: "Lying is wrong just because Google Morals says so." But, you could also walk away saying, "Because Google Morals only tells the truth, lying must be wrong in principle." And then you can ask yourself -- or others -- what principle is at stake. Google's authority would then inspire genuine moral education, and the user would be on his or her way to obtaining "complete" virtue.
On the other hand, Howell claims that some people who use Google Morals might not exert the effort to learn why a course of action is good or bad. Presumably, he has cognitive biases in mind, perhaps inertia, that account for why this might be a regular occurrence. Additionally, Howell notes, even if Google Morals explained why it views 'this' as wrong and 'that' as right (maybe the app could give users the option of receiving stand-alone advice or advice with supporting explanation), users might be unable to integrate it with the rest of their moral knowledge. For example, based on an interaction with Google Morals, they might understand why it is wrong to steal from strangers, but not see the connection to other cases, like stealing from friends, family, and acquaintances. These broader connections often come from extended conversation. However, open-ended moral dialogue isn't part of Google Morals' programming. Google Morals is more like a moral search engine than a chatbot.
Moreover, given the accuracy of Google Morals, users might become disinclined to put in the effort required to discover those broader connections. If this happens, they could become diminished in several ways. They might stop developing new moral beliefs, an outcome that fosters the vice of intellectual laziness. They also could become overly disposed to look to others for guidance, while perhaps shying away from taking the initiative to offer others moral recommendations. Such dispositions can adversely impact character, bolstering such vices as weakness of will, selfishness, and unreliability. Additionally, users might start seeing moral problems in isolation from each other, in which case their self-understanding will suffer. Unless a person treats moral understanding as a matter of developing a consistent and coherent outlook, it is hard to determine whether new actions accord with old beliefs or challenge them. That determination is key to keeping oneself in check against hypocrisy.
As if these problems weren't enough, Howell also warns that while Google Morals can guide users to proper action, they can end up doing what they are told in a robotic manner. For example, consider someone using Google Morals to determine whether to tell a bigot to stop making racist jokes. Google Morals says yes, and the user complies. That person could still lack the appropriate emotions (doesn't feel awful when hearing offensive jokes), might not be inclined to develop the appropriate virtues (as he or she isn't acquiring experience speaking out against injustice without prompting), and doesn't deserve credit for his or her actions (since the judgment call came from Google Morals and not a morally attuned character).
To bring out the full import of these cautions, let's consider another thought experiment. Imagine that an advanced version of Google Morals can dispense correct advice whenever asked and can also sense when the agent is in a moral situation and notify him or her accordingly. Once the technology becomes available, early adopters Johnny and Susie download the Google Morals 2.0 app onto their smartphones, and whenever they are in a morally fraught situation, the phone beeps. (To avoid confusing the beep with a text message, both users assign it a special ring tone, maybe the sound of angelic harps playing.) As soon as they hear the beep, Johnny and Susie have to decide whether or not to ask Google Morals for advice. Given the ease with which they can get reliable, on-the-spot ethical advice, Johnny and Susie are covered in every possible moral situation, and their autonomy remains intact.
From Howell's perspective, Johnny and Susie haven't found the perfect technological enhancement because using the tool violates their moral agency. With their character entirely bypassed by a technological surrogate that makes all the decisions and holds all the beliefs, the price paid for a saintly simulation turns out to be loss of humanity.
Follow Evan Selinger on Twitter: www.twitter.com/evanselinger