07/06/2016 12:22 pm ET Updated Jul 07, 2017

5 Myths About The Future Of Artificial Intelligence

The past decade has seen important advancements in computer science that enable software systems to compile and process new information to continually improve the way they function. This improved artificial intelligence is enabling computers to become an ever more powerful and valuable complement to human capabilities: improving medical diagnoses, weather prediction, supply-chain management, transportation, and even personal choices such as where to go on vacation or what styles of clothes to buy.

Although artificial intelligence has become commonplace -- most smartphones contain some version of AI, such as speech recognition -- the public still has a poor understanding of the technology. As a result, a diverse cast of critics, driven by fear of technology, opportunism, or ignorance, has jumped into the intellectual vacuum to warn policymakers that, sooner than we think, AI will produce a parade of horrible outcomes. Unfortunately, their voices have grown so loud that we are nearing a tipping point where their narratives may be accepted as truth, which would create a real risk that policymakers will decide to ratchet back the pace of progress.

With the White House convening a discussion later this week on the social and economic implications of artificial intelligence technologies, to be followed just days later by the 25th International Joint Conference on Artificial Intelligence, it is a good time to rebut these pervasive and pernicious myths.

Myth No. 1: AI will destroy most jobs.

Many now argue that AI will power a productivity explosion so great that it will destroy jobs faster than the economy can keep up, creating an unemployed underclass that will be dominated by an elite class of "machine owners." MIT professors Erik Brynjolfsson and Andrew McAfee write in their frequently cited book that workers are "losing the race against the machine, a fact reflected in today's employment statistics." These are not new predictions, and they are as wrong today as they have been in years past.

The apocalyptic views that AI will kill jobs suffer from two major errors. The first is that they vastly overestimate the capabilities of AI to replace humans. It is actually quite hard for technology, AI or otherwise, to eliminate jobs, as evidenced by the fact that U.S. productivity has been growing at a historically slow pace. And it is particularly hard to automate large numbers of jobs with AI, because virtually all AI is "narrow AI," designed to focus on doing one thing really well. So, in many occupations, the introduction of AI may not lead to job loss at all; it may instead increase output, quality, and innovation.

The second reason is that even if AI were more capable, there still would be ample job opportunities, because if jobs in one firm are reduced through higher productivity, then costs go down. These savings are recycled through lower prices or higher wages. This puts more money into the economy, and the money is then spent creating jobs in whatever industries supply the goods and services that people demand as their incomes go up. This is why, historically, there has been a negative relationship between productivity growth and unemployment rates.

Myth No. 2: AI will make us stupid.

Even beyond the unfounded fear that smart machines will take our jobs, some dystopians assert that AI will turn us into helpless automatons who are bound to become overly dependent on the machines and in so doing lose our own native skills -- so when the machines occasionally fail, we'll be ill-equipped to take back control. As author Nicholas Carr suggests, "Automation can take a toll on our work, our talents, and our lives."

To be sure, some skills may become less necessary as AI takes over routine tasks that humans used to do -- just as machines like the automobile made it unnecessary for most people to know how to ride a horse -- but AI will also open up new areas of skill. And the issue is not whether these systems will make errors; it is whether, on net, they will make fewer errors than human-controlled activities. The answer is yes: they will make fewer errors -- otherwise they will not be used -- and that will be a boon to mankind.

Myth No. 3: AI will destroy our privacy.

If smart machines can crunch massive amounts of data, then surely they will destroy our privacy. Or so AI dystopians warn us. Reporter and author John Markoff writes, "This neo-Orwellian society presents a softer form of control. The internet offers unparalleled new freedoms while paradoxically extending control and surveillance far beyond what Orwell originally conceived. Every footstep and every utterance is now tracked and collected, if not by Big Brother then by a growing array of commercial 'Little Brothers'."

But there are several reasons why these opponents are wrong. First, while AI systems have the ability and even the need to collect and analyze more information, the threat to privacy is little greater than in non-AI systems, which already collect and analyze large amounts of information. Moreover, the rules that already govern data use and protect privacy today will cover data analyzed by AI, too.

In short, this is basically a policy question, not a technology question. If we don't want government agencies to collect certain data, then Congress can require that and courts enforce it. Whether agencies have or do not have machine-learning systems is irrelevant. In addition, many, if not most of the benefits of AI-enabled data analysis can be obtained without the need to risk disclosing personally identifiable information.

Myth No. 4: AI will enable bias and abuse.

Machine-learning systems are more complex than traditional software systems. It was relatively clear how the older rules-based expert systems made decisions. In contrast, machine-learning systems continuously adjust and improve based on experience. Some critics claim this level of complexity will result in "algorithmic bias" that promotes government and corporate abuse, whether unintentional or deliberate. For example, Cathy O'Neil, author of the forthcoming book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, describes how machine-learning algorithms are likely to be racist and sexist. Critics suggest that organizations will hide behind their algorithms and use the algorithms' complexity as a cover to justify exploitation, discrimination, or other types of unethical or damaging behavior.

It is certainly true that AI systems, like any technology, can be used unethically or irresponsibly. But those who resist AI based on this concern fail to recognize a key point: Machine-learning systems are not independent from their developers or the organizations using them. If an organization wants to systematically discriminate against certain groups, it does not need AI to do so. Furthermore, if an algorithmic system produces unintended and potentially discriminatory outcomes, it is not because the technology itself is malicious; it is because the system simply follows instructions set by human decision making or, more often, relies on real-world data sets that may reflect bias. Finally, in most cases these systems are less biased than human decision making, where subconscious or overt biases permeate every aspect of society.

Myth No. 5: Smart machines will take over and potentially exterminate the human race.

Some argue that machines will become super-intelligent and decide they are better off without humans. Nick Bostrom, who has been called "a philosopher of remarkable influence," writes that a world with advanced AI would produce "economic miracles and technological awesomeness, with nobody there to benefit," like "a Disneyland without children," because the AI would first kill us all. Elon Musk, Bill Gates and Stephen Hawking have also expressed concerns about "killer robots."

It's a sad commentary that the public has become so technophobic that we are even taking these sci-fi claims seriously. The view that smart machines will kill us overstates the pace of technological progress, particularly because growth in the processing power of silicon computer chips is slowing and progress in AI outside of deep learning is relatively modest. Moreover, machines and the human mind are completely different systems, and even major advances in computing are highly unlikely to produce a machine with humanity's intellectual capacity, imagination, or adaptability. As MIT computer scientist Rodney Brooks puts it, "We generalize from performance to competence and grossly overestimate the capabilities of machines -- those of today and of the next few decades." Just as importantly, even if human-level intelligent machines could be built, which is unlikely, they would remain under the control of humans, because we would never build them unless they were largely safe, with the benefits outweighing the costs (just as we do with all technologies in the marketplace).