Computers May Be Closer to Learning Common Sense Than We Think

"What are some likely AI advancements in the next 5 to 10 years?" originally appeared on Quora, the knowledge-sharing network where compelling questions are answered by people with unique insights.

Answer by Yann LeCun, Director of AI Research at Facebook and Professor at NYU, on Quora:

There are a number of areas in which people are working hard and making promising advances:

  • Deep learning combined with reasoning and planning.
  • Deep model-based reinforcement learning (which involves unsupervised predictive learning).
  • Recurrent neural nets augmented with differentiable memory modules (e.g. Memory Networks).
  • Generative/predictive models trained with adversarial training.
  • "Differentiable programming": This is the idea of viewing a program (or a circuit) as a graph of differentiable modules that can be trained with backprop. This points towards the possibility of not just learning to recognize patterns (as with feed-forward neural nets) but to produce algorithms (with loops, recursion, subroutines, etc). There are a few papers on this from DeepMind, FAIR, and others, but it's rather preliminary at the moment.
  • Hierarchical planning and hierarchical reinforcement learning: This is the problem of learning decomposing a complex task into simpler subtasks. It seems like a requirement for intelligent systems.
  • Learning predictive models of the world in an unsupervised fashion (e.g. video prediction).
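
To make the "differentiable programming" item above concrete, here is a minimal sketch, assuming PyTorch: a tiny "program" whose body is a loop over a learned module, trained end to end with backprop. The module names, loop length, and toy regression target are illustrative placeholders, not anything specified in the answer.

```python
# A minimal differentiable-programming sketch (assumes PyTorch is installed).
# The "program" is a loop whose body is a small learned module; because every
# step is differentiable, backprop trains the whole control flow end to end.
import torch
import torch.nn as nn

class LoopProgram(nn.Module):
    """Applies the same learned step module n_steps times, like a tiny for-loop."""
    def __init__(self, dim: int, n_steps: int):
        super().__init__()
        self.step = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.n_steps = n_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.n_steps):   # the loop is part of the differentiable graph
            x = x + self.step(x)        # residual update keeps gradients well-behaved
        return x

# Toy task (illustrative only): learn to imitate a fixed random linear transform.
torch.manual_seed(0)
dim, n_steps = 8, 4
program = LoopProgram(dim, n_steps)
target = torch.randn(dim, dim)          # hypothetical ground-truth transform
optimizer = torch.optim.Adam(program.parameters(), lr=1e-2)

for it in range(200):
    x = torch.randn(64, dim)
    loss = nn.functional.mse_loss(program(x), x @ target.T)
    optimizer.zero_grad()
    loss.backward()                      # backprop through every loop iteration
    optimizer.step()
```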

If significant progress is made along these directions in the next few years, we might see the emergence of considerably more intelligent AI agents for dialog systems, question-answering, adaptive robot control and planning, etc.

A big challenge is to devise unsupervised/predictive learning methods that would allow very large-scale neural nets to "learn how the world works" by watching videos, reading textbooks, etc., without requiring explicit human annotation.
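
As a rough illustration of what such predictive learning can look like in code, the sketch below (again assuming PyTorch) trains a small network to predict the next video frame from the current one, so the only supervision comes from future observations themselves. The synthetic "moving bar" frames and the tiny convolutional predictor are placeholders, not any particular published model.

```python
# Sketch of unsupervised predictive learning via next-frame prediction (assumes PyTorch).
# The data are synthetic moving patterns standing in for real video.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    """Predicts frame t+1 from frame t; the future frame itself is the training signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)

def synthetic_clip(batch, size=16):
    """Pairs of 'frames': a vertical bar that shifts one pixel to the right."""
    t0 = torch.zeros(batch, 1, size, size)
    t1 = torch.zeros(batch, 1, size, size)
    cols = torch.randint(0, size - 1, (batch,))
    for i, c in enumerate(cols):
        t0[i, 0, :, c] = 1.0        # bar at column c in the current frame
        t1[i, 0, :, c + 1] = 1.0    # bar one step to the right in the next frame
    return t0, t1

model = FramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    frame_t, frame_t1 = synthetic_clip(batch=32)
    loss = nn.functional.mse_loss(model(frame_t), frame_t1)   # no human labels anywhere
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```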

This may eventually lead to machines that have learned enough about the world that we see them as having "common sense."

It may take 5 years, 10 years, 20 years, or more. We don't really know.
