Sprite Cherry, 'gaydar,' and murder. Why AI is not to be trusted -- yet

AI (with a little help from LeBron James) informed Coca-Cola’s decision to launch Sprite Cherry

Coca-Cola

You may have seen the news flying around this week about Coca-Cola and artificial intelligence. Coke, with its long history of tradition and secrecy, was now taking orders from a bit of AI software. “AI told Coca-Cola to make Cherry Sprite. So it did,” declared Quartz. As headlines go, it certainly grabbed my attention, but like most AI gonfalons, it seems to encourage a growing and unwarranted over-reliance on AI — something that can come with disastrous consequences.

“Loss of control”

Three professors at the University of Edinburgh revisited the 2009 plane crash of Air France Flight 447 to examine the unanticipated consequences of automation, which can lead to a “loss of control.” Just prior to impact and the loss of all 228 people on board, the cockpit recorder captured one of the pilots saying “This can’t be true,” as what they were experiencing was contradicted by the technology in which they had placed their trust. Writing in the Harvard Business Review, the professors warned:

“Our research… examines how automation can limit pilots’ abilities to respond to such incidents, as becoming more dependent on technology can erode basic cognitive skills… commercial aviation reveals how automation may have unanticipated, catastrophic consequences that, while unlikely, can emerge in extreme conditions.”
Air France 447 crashed into the Atlantic Ocean on June 1, 2009, after its autopilot disconnected. The plane shown here is the same type as the one that crashed.

Wikipedia

“Gaydar”

Meanwhile, two researchers at Stanford University claim to have developed “machine vision [that] can infer sexual orientation by analysing people’s faces,” as reported by the Economist. Wait, what? Faces are now indicators of someone’s sexuality? Really? Let’s just call this AI’s version of phrenology.

Phrenology was a pseudoscience used in the 19th century to justify assumptions about intelligence, character, and other traits.

Wikipedia

We’ve begun to recognize that machine learning algorithms can incorporate the biases of their developers, and that, for better or worse, they can also be trained by their users to exhibit certain qualities. While these shortcomings can be addressed, there’s still the problem of accuracy. Outside of a lab setting, from a pool of 1,000 randomly selected males, the AI selected 100 of them as most likely to be gay, but only 47 of them were — so it was wrong a majority of the time. Further, the bar needs to be very high when drawing conclusions about something so intimate and important.
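
To put those numbers in perspective, here is a minimal sketch in Python using only the figures reported above; the variable names and the precision framing are mine, not the researchers’.

```python
# Figures as reported: out of 1,000 randomly selected men, the model
# flagged 100 as most likely to be gay, and 47 of those actually were.
flagged = 100
true_positives = 47

# Precision: of the men the model flagged, how many did it get right?
precision = true_positives / flagged
print(f"Precision: {precision:.0%}")              # 47%
print(f"Mislabeled: {flagged - true_positives}")  # 53 of the 100 flagged men

```

In other words, outside the lab even the model’s most confident picks were wrong more often than they were right.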

There’s still too much that we don’t understand about this particular AI to find it trustworthy. In describing the Stanford research project as “gaydar,” the Verge cautioned:

“AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.”

A lack of transparency and developer biases are not the only reasons that we should go slow when allowing a particular AI to earn our trust. When it comes to AI software, there’s also the problem of flawed inputs creating flawed outputs.

The data problem

Earlier this summer, a San Francisco Superior Court judge followed the recommendation of a predictive algorithm and released Lamonte Mims, a 19-year-old arrested for a parole violation. Five days later, Mims allegedly robbed and murdered a 71-year-old photographer. An investigation by the District Attorney’s office revealed that the predictive software had worked as intended; the problem was the data. A clerk had failed to properly enter data concerning Mims’ criminal record, information that would have caused the algorithm to recommend holding him in custody until trial.

Based in part on the recommendation of a predictive algorithm, Lamonte Mims was released from custody in San Francisco. Five days later he allegedly committed murder.

SFPD/ABC7 Screenshot

There are many more scenarios where we’ll want to be careful not to rely prematurely on AI. Self-driving cars are one of them.

Consider a future where we trust these vehicles to be fully autonomous, and a parlor game of utilitarian ethics becomes inevitable. Let’s say you have two autonomous vehicles: one with a 91-year-old man dozing across the front seat, and the other with three kids in the backseat and a pregnant mother and perfect role-model father up front. The cars barrel towards one another on a single-lane road with unprotected, 1,000-foot drop-offs on either side. Which car will veer off and sacrifice its passengers? What will the AI instruct the cars to do? What if both cars are being “operated” by sleeping 91-year-old men? Or what if one of the vehicles contains a monarch? Regardless of the variables, these are bedeviling “loss of control” predicaments, not unlike the one faced by the pilots of Air France 447.

You’re in the driver’s seat

The easy answer to the ethical quandary: AI will instruct both cars to halt. But to get there, we will have to trust AI and surrender control. We literally must trust some number of lines of software code with our lives.

Let’s not confuse an AI-supplied recommendation with an AI-executed decision. As the San Francisco Chronicle noted in its coverage of the Mims case, “Judges are not required to heed the algorithm’s advice.” Nor did AI “decide” that Coke would make Sprite Cherry. Both instances — one serious, one trivial — show that for many situations, we’re simply not there yet.

Additional reporting by Jeremy Yuan.
