Is Artificial Intelligence Overhyped in 2017?

Is AI over-hyped in 2017? originally appeared on Quora - the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Joanne Chen, Partner at Foundation Capital, on Quora:

To quote Bill Gates: "We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction."

In short, over the next ten years, I don't believe AI will be overhyped. But will all of our jobs be automated away by bots in 2017? Unlikely. I believe the technology has incredible potential and will permeate all aspects of our lives. Today, though, my sense is that many people don't understand the current state of AI, and that misunderstanding fuels the hype.

So what can AI do today?

Artificial intelligence, a concept dating back to the 1950s, is simply the notion that a machine can perform tasks that normally require human intelligence. But AI today is not what the science fiction movies portray it to be. What we can do today falls in the realm of narrow AI (as opposed to general intelligence): the idea that machines can perform very specific tasks in a constrained environment. Within narrow AI, there are a variety of techniques you may have heard of. I'll use examples to illustrate the differences.

Let’s say you want to figure out my age (which is 31).

1) Functional programming: what we commonly know as programming, a way to tell a computer to do something in a deterministic fashion. I tell my computer that to compute my age, it needs to solve AGE = today's date – birth date. Then I give it my birth date (Dec 4, 1985). There is a 0% chance the computer will get my age wrong.
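
Here is a minimal sketch of that deterministic computation in Python (the function name `compute_age` is just illustrative; the birth date is the one from the example):

```python
from datetime import date

def compute_age(birth_date: date, today: date) -> int:
    """Deterministically compute age in whole years: AGE = today's date - birth date."""
    years = today.year - birth_date.year
    # Subtract one year if the birthday hasn't happened yet this year.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

print(compute_age(date(1985, 12, 4), date(2017, 6, 1)))  # -> 31, every time
```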

2) Machine learning: an application of AI in which we give machines data and let them learn for themselves to probabilistically predict an outcome. The machine improves its ability to predict with experience and more relevant data. Take age, for example. What if I had a dataset of 1,000 people's ages and song preferences? Song preference is highly correlated with generation: Led Zeppelin and The Doors fans are mostly 40+, while Selena Gomez fans are generally younger than 25. I could then ask the computer: given that I love the Spice Girls and Backstreet Boys, how old does it think I am? The computer looks at these correlations, compares them with a list of my favorite songs, and predicts my age with some stated probability. This is a very simple example of machine learning.
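
As a rough sketch of that idea in Python, here is a k-nearest-neighbors regressor trained on a tiny invented dataset (the preferences and ages below are made up purely for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Invented training data: each row marks whether a person likes
# [Led Zeppelin, The Doors, Selena Gomez, Spice Girls, Backstreet Boys].
X_train = np.array([
    [1, 1, 0, 0, 0],   # classic-rock fan
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],   # pop fan
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1],   # 90s-pop fan
])
y_train = np.array([52, 47, 55, 22, 19, 31])  # ages (also invented)

# Learn from the data rather than from hand-written rules.
model = KNeighborsRegressor(n_neighbors=3)
model.fit(X_train, y_train)

# Someone who loves the Spice Girls and Backstreet Boys:
me = np.array([[0, 0, 0, 1, 1]])
print(model.predict(me))  # an estimate learned from similar people in the data
```

A real system would use far more people, far more songs, and a model that reports its uncertainty, but the workflow is the same: fit on labeled examples, then predict for a new one.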

3) Deep learning: a type of machine learning that emerged in the last few years and was widely covered in the media when Google DeepMind's AlphaGo program defeated the South Korean Go master Lee Se-dol.

Deep learning goes a step further than traditional ML in that it enables the machine to learn purely from examples. In contrast, traditional ML requires programmers to tell the computer which features it should look for. As a result, deep learning functions much more like the human brain. This works especially well for applications like image recognition.
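
A minimal sketch of the idea, using scikit-learn's built-in handwritten-digits dataset: the network below is far shallower than a real deep-learning system, but the principle is the same in that it learns to recognize digits from raw pixels, with no hand-engineered features:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Raw 8x8 pixel images of handwritten digits; no hand-crafted features.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multi-layer network. Production deep-learning systems are much
# larger, but the idea is identical: learn features directly from examples.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.2f}")
```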

4) Deep reinforcement learning: DRL goes one step further and combines deep learning with reinforcement learning, the notion of learning by trial and error, solely from rewards or punishments. DRL mimics how children learn: they observe other people doing things, they try things out, and depending on the reward, they either repeat the behavior or not!
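
A full DRL agent is too large for a short example, but the reinforcement-learning half, trial and error driven purely by rewards, fits in a few lines. Here is a sketch of an epsilon-greedy agent learning an invented two-armed bandit (all numbers below are made up for illustration; a DRL system would replace the value table with a deep network):

```python
import random

# Trial-and-error learning on an invented two-armed bandit:
# arm 1 pays off more often, but the agent doesn't know that in advance.
PAYOFF_PROB = [0.3, 0.7]          # hidden from the agent
value = [0.0, 0.0]                # the agent's running reward estimate per arm
counts = [0, 0]
EPSILON = 0.1                     # how often to try a random arm (explore)

random.seed(0)
for _ in range(1000):
    # Explore occasionally; otherwise exploit the best-looking arm.
    if random.random() < EPSILON:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: value[a])
    reward = 1.0 if random.random() < PAYOFF_PROB[arm] else 0.0
    counts[arm] += 1
    # Nudge the running-average estimate toward the observed reward.
    value[arm] += (reward - value[arm]) / counts[arm]

print(value)   # estimates approach [0.3, 0.7]: learned purely from rewards
```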

Machine learning technologies have become more accessible (which is also why media hype around this space has been growing), driven by advances in three areas:

1) Infrastructure to run ML algorithms – massive improvements in storage, processing capabilities (e.g., GPUs that speed up parallel processing), and accessibility for rapid innovation (the cloud).

2) Newly developed and widely available algorithms.

3) A proliferation of data with which to train algorithms.

Between algorithmic innovation and data availability, I believe data plays the more crucial role in these advancements. Historically, breakthroughs in AI have quickly followed the availability of new datasets, while many of the corresponding algorithms had already been available for over a decade.

AI will permeate our lives in the next ten years. Think of the possible time, money, and manpower saved by automating simple processes. And as the technology becomes more advanced, the use cases will get even more exciting. I think it’s a wonderful time as an entrepreneur to be able to leverage this technology, and I couldn’t be more excited as an investor.

This question originally appeared on Quora - the place to gain and share knowledge, empowering people to learn from others and better understand the world. You can follow Quora on Twitter, Facebook, and Google+.
