Future of AI at SciFoo 2015

Every year approximately 200 people meet at Google in Mountain View, California for an event called SciFoo, probably one of the most famous unconferences. Innovators from various disciplines are given access to Google's cafeterias, to rooms with funky names such as neuralyzer, flux and capacitor and are left to organize sessions where they discuss freely, present bold ideas, give demos of gadgets etc. No topic is considered too crazy or taboo, and half-baked thoughts and ideas are encouraged rather than rebuked. The outcome is a glorious mess of ideas and inspiration that one needs weeks to digest afterward.

One of the sessions at SciFoo this year, organized by Nick Bostrom, Gary Marcus, Jaan Tallinn, Max Tegmark, and Murray Shanahan, discussed the future of artificial intelligence. Each of the organizers presented a 5-minute thought piece, after which the floor was open for discussion. SciFoo operates under a "frieNDA" policy whereby people's comments can only be reported with their permission - I'm grateful to the five speakers for consenting.

Murray Shanahan began by delineating the distinction between specialist AI, which is being developed with certainty in the short term (on a time frame of 5-10 years), and general AI, which has a long time horizon and whose full development for now belongs to the realm of science fiction. Shanahan then raised three question-ideas:

1. Do we want to build properly autonomous machines or do we want to ensure that they are just tools?
2. If we could create a powerful AI that could give us anything we wanted, what would we get it to do?
3. Should we create our own evolutionary successors?

While Murray Shanahan opened with philosophical idea-questions, taking the development of general, strong AI as a given, Gary Marcus adopted the position of the skeptic and focused on the question of how imminent strong AI really is. On the question of how soon strong AI will come, he expressed the opinion that very little progress has been made on strong AI and that the field's focus is almost entirely on narrow AI.

Deep learning, the most promising avenue towards strong AI, is easily fooled, he felt, and doesn't conceive of the world as we do. As an example he pointed to the T-shirt he had worn the previous day, printed with a wavy pattern and bearing the inscription "Don't worry killer robot, I am a starfish" - a mocking allusion to the fact that image recognition algorithms are still plagued by very basic mistakes, such as confusing wavy patterns with starfish. Marcus therefore concluded that strong, general AI is at least 20 to 40 years away. Even though he is concerned about strong AI, he didn't think it would come soon, mainly because we are still missing a solution to a crucial problem: how to instantiate common sense in a machine.
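For readers curious what such a basic mistake looks like in practice, here is a minimal sketch (not from the session) of the kind of probe Marcus was alluding to: feeding a synthetic wavy pattern to an off-the-shelf image classifier and inspecting its top guesses. It assumes a Python environment with torch, torchvision (0.13 or later), numpy, and Pillow installed; the generated pattern is just a stand-in for the T-shirt print.

```python
# Sketch: ask a pretrained ImageNet classifier what it "sees" in an abstract wavy pattern.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

# Generate a simple high-contrast wavy interference pattern, 224x224 pixels.
x = np.linspace(0, 8 * np.pi, 224)
pattern = np.sin(x[None, :] + 3 * np.sin(x[:, None]))
img = Image.fromarray(((pattern + 1) * 127.5).astype(np.uint8)).convert("RGB")

# Standard ImageNet preprocessing and a small pretrained classifier.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

with torch.no_grad():
    probs = torch.softmax(model(preprocess(img).unsqueeze(0)), dim=1)[0]

# Print the five labels the network assigns the highest probability to;
# on abstract patterns like this, the guesses often bear little relation
# to anything a human would recognize in the image.
top = torch.topk(probs, 5)
labels = weights.meta["categories"]
for p, i in zip(top.values, top.indices):
    print(f"{labels[i]}: {p:.2%}")
```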

Nick Bostrom opened his remarks by stating that it is hard to tell how far we are from human level AI. However, an interesting question according to him was: what happens next? Very likely we will get an intelligence explosion. This means that things that are compatible with the laws of physics but are currently part of science fiction could happen. So what can we do to increase the chance of beneficial outcomes? Bostrom felt that responders to this question usually belong to two camps: those who believe that this is not a special problem, therefore no special effort is needed and we will just solve this as we go along, and those who believe there is no point in trying because we cannot control it anyway. Bostrom, however, wanted to point out that there could instead be a third way of thinking about this: what if this is a difficult but solvable problem? he asked.

Jaan Tallinn talked about his personal history of increasing concern regarding the development of AI, from his first encounter with the writings of Eliezer Yudkowsky to his involvement with and support of organizations that attempt to steer the development of AI towards beneficial outcomes. Max Tegmark introduced one of the organizations Tallinn supports, the Future of Life Institute, which steered the effort behind an open letter, signed by more than 6,000 people including top AI researchers and developers, underlining the importance of building AI that is robust and beneficial to humanity. The letter and accompanying research priorities document received financial support from Elon Musk, which enabled a grant program for AI safety research.

The presentations were followed by a lively general discussion. Below are some of the questions from the public and the remarks of the panel.

Do you think we can achieve AI without it having a physical body and emotions?

The panel remarked that intelligence is a multifaceted thing and that artificial intelligence is already ahead of us in some ways. A better way of thinking about intelligence is that it simply means that you are really good at accomplishing your goals.

Since cognition is embodied (for example, opportunities for acquiring and using language depend on motor control, and calculation depends on our hands), is it possible to separate software from hardware in terms of cognition?

Robots have bodies and sensors, so to the extent that that matters, it is not an obstacle but merely a challenge. Embodiment is not a necessary condition for cognition. The fact that machines don't have bodies won't save us.

What do we do with strong AI? Why is its fate ours to choose?

At the end of the day you have to be a consequentialist and ask yourself: why are you involved in a project that randomizes the world? What is the range of futures ahead of you? Also, this question has different answers depending on what kind of AI you imagine building: one that is dead inside but can do amazing things for us, or something that is conscious and able to suffer.

Isn't AI inevitable if we want to colonize the Universe?

Indeed, when contemplating the kind of AI we want to develop, we have to think beyond the near future and the limits of our planet; we should also think about humanity's cosmic endowment.

In order to design a system that is more moral, how do you decide what is moral?

We should not underestimate the whole ecosystem of values that might be vastly different from any human's. We should also think not just about the initial set of moral values but also about what we want to allow in terms of moral development.

We are already creating corporations that we feel have intentions and an independent existence. In fact we create many entities, social or technological, that demonstrate volition, hostility, and morality. So are we, in a sense, simply the microbiome of future AI (echoing another session at SciFoo that tackled the controversial question of whether we indeed have free will or are in large part controlled by our microbiome, our gut bacteria)?

The panel responded that one of the issues concerning us, the potential "microbiome" of future entities, is whether we are going to get a unipolar or a multipolar outcome (a single AI entity or a diverse ecosystem of AIs). The idea of the intelligence explosion coming out of a system that is able to improve itself seems to point towards a unipolar outcome. In the end it very much depends on the rate at which the AI will improve itself.
Another issue is building machines that do not only what we literally ask them to do but what we really want - the key to remaining a thriving microbiome. Some panelists felt this was a big challenge: could we really create AI that is not self-reflective? Much would seem to hinge on which aspects of the world the AI could represent. Once an oracle machine (generally considered safe because such a machine only answers questions, like an oracle, and does not act upon the world) starts modeling the people who ask the questions, its responses could start to include manipulative answers as well. Indeed, in some sense our DNA invented our brains to help reproduce itself better, but we found ways to circumvent that, through birth control for example (similarly, we have found ways to hack our gut bacteria). So would our "microbiome goals" be retained by entities smarter than us?
Finally, another related question is what the machines would be able to learn: what kinds of values and action schemas would be "innate" (pre-programmed), and what would the AI learn?

The session ended in a true SciFoo spirit with an honest recognition of our limited knowledge but also with a bold thought about the limitless possibilities for discovery and creativity:

Even in psychology we don't know what general intelligence really means, so in modeling cognitive processes we can't, in a sense, even claim that we are either near or far from general AI.

To this thought from the audience the panel remarked that even though the threshold of general or super intelligence might be deceptive in a sense, being fluid and ill-defined, there is no issue in principle with creating general intelligence - after all, our own brains are an existence proof that it is possible.
