What's Next In Computer Science

This post was published on the now-closed HuffPost Contributor platform.

These questions originally appeared on Quora - the knowledge sharing network where compelling questions are answered by people with unique insights.

Answers by Pedro Domingos, Professor at the University of Washington and author of The Master Algorithm, on Quora.

A: Academic research is more theoretical and long-term; industrial research is more applied and short-term. If you want to have impact in a few months' time, industry is the place for you. If you want to work on the deep problems and have a shot at very high impact, go to academia. These days I often hear things like "Why do machine learning in academia when industry has way more resources and way more people working on the same problems?" I think this is a red herring. Researchers in industry are under constant pressure, explicit or implicit, to contribute to the company's bottom line, and that's understandable. But solving the deep problems is more important than ever, precisely because of how pervasive machine learning has become, and academia is the best place to do it.

What we've seen in the last several years is that many of the more applied people have moved from academia to industry. This is a good thing, because it's an important path by which the field's research results are transferred to the real world, but it's also had the unfortunate side effect that the balance within academia is now perhaps tilted too much toward theory and away from experimental science. So we need to train the next generation of experimental researchers to fill the gap!

...

A: Here's one prediction: as computers get better at natural language understanding, more and more programming will be done by non-programmers. This will increase by many orders of magnitude the number of people who can effectively be computer scientists, developing algorithms for a living, and the face of computer science will change radically as a result. Right now, unfortunately, only a certain kind of mind (logical, meticulous, etc.) can succeed in computer science. But in the future that will be less important, because AI will fill in the blanks, and anyone with an idea, small or large, will be able to turn it into a working system. If you think progress is fast now, imagine what it will be like when this happens.

...

A: No, provided we stick to a simple rule: don't create AIs with goals of their own. AIs can come up with their own subgoals, but only in service of the goals we set them, and within the constraints we specify. This is how all AIs work today, and as long as they keep doing so, they can be infinitely intelligent without being a threat to us. You don't stay awake at night worrying that your dog will attack you. Why would you worry about your robot, which was designed to serve you even more faithfully than your dog?

Of course, human nature being what it is, sooner or later someone will try to create a self-seeking AI. To deal with that, we need what William Gibson called the "Turing police": good AIs that catch bad AIs in the same way that cops catch criminals. Bank robbers use highways to get away, but that's not a reason to not have highways. Same with AI.

There's a different kind of AI threat to humanity that's much more serious: the danger that AIs will cause damage because they're ignorant, lack common sense, or interpret our commands too literally - the "sorcerer's apprentice" problem. In fact, this already happens all the time, when someone who should be granted credit is denied it, a patient is misdiagnosed, an innocent person is flagged as a potential terrorist, etc. But the way to minimize these errors is to make computers more intelligent, not less. People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.

