What's Next In Machine Learning?

"What are good research topics in machine learning in 2016?" originally appeared on Quora - the knowledge-sharing network where compelling questions are answered by people with unique insights.

Answer by Carlos Guestrin, Amazon Professor of Machine Learning in Computer Science & CEO of Dato, Inc., on Quora.

I strongly believe we've only scratched the surface of what's possible with machine learning... so there is no shortage of good research topics. Here are four I think are particularly important today (and I'm only working on two of them):

  • Explaining the predictions made by machine learning models: To truly trust an ML model, we need to evaluate it quantitatively and gain a qualitative understanding of why it works. A breakthrough in this topic is going to rapidly accelerate the rate of adoption of machine learning in the real world. My student Marco Ribeiro and postdoc Sameer Singh have written a really exciting paper on gaining intuitive explanations of why a particular prediction is made, and showed that even non-experts can improve the performance of ML models using these explanations (a minimal sketch of this style of local explanation follows this list).
  • Understanding why deep learning works: Deep neural networks have blown other methods out of the water, especially in problems related to vision and speech data. However, there is very little theoretical understanding as to why these methods perform so well. Deeper insights here will guide another decade of research.
  • Democratizing (scalable) machine learning: We need to learn ML models with ever-increasing amounts of data. Research and industry around databases have made that technology broadly accessible, even at scale. In ML, you still need a tremendous amount of expertise to learn good models from data, especially at scale. This needs to change for ML to have the same impact in the world that the databases community has had (and I believe we can have much, much more impact :)). Making even the most sophisticated ML techniques broadly accessible and applicable is the goal of Dato (my startup), and the focus of a lot of my research in recent years, including recent work of my student Tianqi Chen on XGBoost, which is used by more than half of the winning teams in Kaggle competitions (a short usage sketch also follows this list).
  • Representing common-sense knowledge: Machine learning methods have been most successful at representing lower-level information, such as what's used to detect objects in images or recognize speech with deep learning. However, humans reason by exploiting their high-level understanding of the world, reasoning about objects and their relations, and building analogies. A step in this direction is the work done at the Allen Institute for Artificial Intelligence on learning to solve math and geometry problems.
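To make the first bullet more concrete, here is a minimal sketch of a LIME-style local explanation of a single prediction: perturb the instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients act as the explanation. The dataset, model, helper name, and hyperparameters below are illustrative stand-ins, not the setup from the Ribeiro and Singh paper.

```python
# A minimal LIME-style sketch: explain one prediction of a black-box model by
# fitting a locally weighted linear surrogate. Illustrative only; this is not
# the authors' code or the lime package's API.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(model, x, X_train, n_samples=5000, kernel_width=0.75, top_k=5):
    """Illustrative helper: return the top_k locally most influential features."""
    rng = np.random.default_rng(0)
    scale = X_train.std(axis=0)
    # Perturb the instance with Gaussian noise scaled per feature.
    Z = x + rng.normal(0.0, 1.0, size=(n_samples, x.shape[0])) * scale
    # Query the black box for the class-1 probability at each perturbation.
    preds = model.predict_proba(Z)[:, 1]
    # Weight perturbations by proximity to x (exponential kernel on scaled distance).
    dists = np.linalg.norm((Z - x) / (scale + 1e-12), axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width * x.shape[0]))
    # The weighted surrogate's coefficients serve as the local explanation.
    surrogate = Ridge(alpha=1.0).fit(Z - x, preds, sample_weight=weights)
    order = np.argsort(np.abs(surrogate.coef_))[::-1][:top_k]
    return [(data.feature_names[i], surrogate.coef_[i]) for i in order]

for name, weight in explain_instance(model, X[0], X):
    print(f"{name}: {weight:+.4f}")
```

And for the third bullet, a short sketch of how accessible a sophisticated gradient-boosted model has become through XGBoost's scikit-learn-style wrapper; the dataset and hyperparameters are again only placeholders, not a recommended configuration.

```python
# Fitting an XGBoost classifier with a few lines of code; placeholder data
# and hyperparameters, shown only to illustrate how accessible the tool is.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```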
