Machine Learning Has Gone Mainstream Over the Past Year

03/06/2017 11:31 am ET Updated Mar 06, 2017

“What were the main advances in machine learning/artificial intelligence in 2016?” originally appeared on Quora — the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Xavier Amatriain, VP of Engineering at Quora, on Quora:

2016 may very well go down in history as the year of the machine learning hype. Everyone seems to be doing machine learning, and if they are not, they are thinking of buying a startup to claim they do.

Now, to be fair, there are reasons for much of that “hype.” Can you believe that it has been only a year since Google announced they were open-sourcing TensorFlow? TF is already a very active project, used for everything from drug discovery to music generation. Google has not been the only company to open-source its ML software, though; many have followed its lead. Microsoft open-sourced CNTK, Baidu announced the release of PaddlePaddle, and Amazon just recently announced that it would back MXNet in its new AWS ML platform. Facebook, on the other hand, is supporting the development of not one but two Deep Learning frameworks: Torch and Caffe. Google is also backing the highly successful Keras, so things are at least even between Facebook and Google on that front.
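To give a sense of what TensorFlow code from that era looks like, here is a minimal, hypothetical sketch in the graph-and-session style of TF 1.x, fitting a toy linear model with gradient descent. All names and sizes are illustrative, not taken from any of the projects above.

```python
# A minimal sketch of early (pre-2.x) TensorFlow: build a static graph,
# then run it in a session. Toy linear regression; sizes are illustrative.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])   # inputs fed at run time
y = tf.placeholder(tf.float32, shape=[None])   # targets fed at run time
w = tf.Variable(0.0)
b = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x + b - y))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

xs = np.linspace(0.0, 1.0, 50).astype(np.float32)
ys = 3.0 * xs + 1.0                            # ground truth: w=3, b=1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train, feed_dict={x: xs, y: ys})
    print(sess.run([w, b]))                    # approaches [3.0, 1.0]
```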

Besides the “hype” and the outpouring of corporate support for machine learning open source projects, 2016 also saw many applications of machine learning that were almost unimaginable a few months earlier. I was particularly impressed by the quality of WaveNet’s audio generation. Having worked on similar problems in the past, I can appreciate those results. I would also highlight some of the recent results in lip reading, an excellent application of video recognition that is likely to be very useful (and maybe scary) in the near future. I should also mention Google’s impressive advances in machine translation. It is amazing to see how much this area has improved in a year.

As a matter of fact, machine translation is not the only important advance we have seen in machine learning for language technologies this past year. I think it is fascinating to see some of the recent approaches that combine deep sequential networks with side information to produce richer language models. In “A Neural Knowledge Language Model,” Bengio’s team combines knowledge graphs with RNNs, and in “Contextual LSTM models for Large-scale NLP Tasks,” the DeepMind folks incorporate topics into the LSTM model. We have also seen a lot of impressive work on modeling attention and memory for language. As an example, I would recommend “Ask Me Anything: Dynamic Memory Networks for Natural Language Processing,” presented at this year’s ICML.
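To make the “side information” idea concrete, here is a minimal, hypothetical Keras sketch — not the architecture from either paper — of a word-level LSTM language model that conditions on a fixed topic vector by concatenating it to every word embedding. All layer sizes are made up for illustration.

```python
# Illustrative only: an LSTM language model conditioned on a topic vector.
import numpy as np
from keras.layers import Input, Embedding, LSTM, Dense, Concatenate, RepeatVector
from keras.models import Model

vocab_size, seq_len, topic_dim = 10000, 20, 50   # hypothetical sizes

words = Input(shape=(seq_len,), dtype='int32')   # token ids
topic = Input(shape=(topic_dim,))                # side information
emb = Embedding(vocab_size, 128)(words)          # (seq_len, 128) per example
topic_seq = RepeatVector(seq_len)(topic)         # tile the topic per time step
h = LSTM(256, return_sequences=True)(Concatenate()([emb, topic_seq]))
next_word = Dense(vocab_size, activation='softmax')(h)  # per-step prediction

model = Model(inputs=[words, topic], outputs=next_word)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()
```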

Also, I should at least mention a couple of things from NIPS 2016 in Barcelona. Unfortunately, I had to miss the conference the one time it was held in my hometown. I did follow it from a distance, though. And from what I gathered, the two hottest topics were probably Generative Adversarial Networks (including the very popular tutorial by Ian Goodfellow) and the combination of probabilistic models with Deep Learning.
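For readers who have not seen GANs, the core idea fits in a few lines: a generator maps noise to samples, a discriminator classifies real versus generated, and the generator is trained to fool the discriminator. The Keras sketch below is a hypothetical toy version (all sizes and data invented), not code from Goodfellow’s tutorial.

```python
# Toy GAN sketch: alternate between training D on real vs. fake samples
# and training G (through a frozen D) to make fakes look real.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model, Sequential

noise_dim, data_dim = 16, 2                       # hypothetical sizes

G = Sequential([Dense(32, activation='relu', input_dim=noise_dim),
                Dense(data_dim)])                 # noise -> fake sample
D = Sequential([Dense(32, activation='relu', input_dim=data_dim),
                Dense(1, activation='sigmoid')])  # sample -> p(real)
D.compile('adam', 'binary_crossentropy')

D.trainable = False                               # freeze D inside the stack
z = Input(shape=(noise_dim,))
gan = Model(z, D(G(z)))                           # this model trains only G
gan.compile('adam', 'binary_crossentropy')

for step in range(1000):
    real = np.random.randn(64, data_dim) + 3.0    # toy "real" distribution
    fake = G.predict(np.random.randn(64, noise_dim))
    D.train_on_batch(np.vstack([real, fake]),
                     np.vstack([np.ones((64, 1)), np.zeros((64, 1))]))
    gan.train_on_batch(np.random.randn(64, noise_dim), np.ones((64, 1)))
```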

Let me also mention some of the advances in my main area of expertise: Recommender Systems. Of course, Deep Learning has also impacted this area. While I would still not recommend DL as the default approach to recommender systems, it is interesting to see how it is already being used in practice, and at large scale, by products like YouTube. That said, there has been interesting research in the area that is not related to Deep Learning. The best paper award at this year’s ACM RecSys went to “Local Item-Item Models For Top-N Recommendation,” an interesting extension to Sparse Linear Methods (i.e., SLIM) that adds an initial unsupervised clustering step. Also, “Field-aware Factorization Machines for CTR Prediction,” which describes the winning approach to the Criteo CTR Prediction Kaggle Challenge, is a good reminder that Factorization Machines are still a useful tool to have in your ML toolkit.
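As a refresher on why Factorization Machines are so handy, here is the standard second-order FM score in a few lines of NumPy, using Rendle’s linear-time reformulation of the pairwise term. This is the plain FM, not the field-aware variant from the paper (FFM learns a separate latent vector per feature field); all names and sizes are illustrative.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score for one feature vector x.

    y(x) = w0 + w.x + sum_{i<j} <V_i, V_j> x_i x_j
    The pairwise term equals 0.5 * sum_f [(V[:,f].x)^2 - (V[:,f]^2).(x^2)],
    computable in O(n*k) instead of O(n^2 * k).
    x: (n,) features; w0: bias; w: (n,) linear weights; V: (n, k) factors.
    """
    xv = x @ V                                        # (k,)
    pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return w0 + x @ w + pairwise

# Hypothetical toy usage
rng = np.random.RandomState(0)
n, k = 8, 4
print(fm_predict(rng.rand(n), 0.1, rng.randn(n), 0.01 * rng.randn(n, k)))
```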

I could probably go on for several more paragraphs just listing impactful advances in machine learning from the last 12 months. Note that I haven’t even listed any of the breakthroughs related to image recognition or deep reinforcement learning, or prominent applications such as self-driving cars, chatbots, or game playing, all of which saw tremendous advances in 2016. Not to mention the controversy around the adverse effects machine learning is having, or could have, on society, and the rise of discussions around algorithmic bias and fairness.

Before I am called out on this, I should also mention that most of these advances were probably published by Schmidhuber years ago. But, hey, at least he was featured in the New York Times this year!

