04/29/2014 05:38 pm ET | Updated Jun 29, 2014

Transcendence: An AI Researcher Enjoys Watching His Own Execution


As an Artificial Intelligence (AI) professor at Berkeley, I watched with some discomfort as a terrorist group, terrified by the possibility of a "strong" AI system that far exceeds human levels of intelligence, assassinated Will Caster (Johnny Depp), an AI professor at Berkeley. By the end of the movie, we are convinced that the terrorist group did the right thing.

So, how seriously should we take the movie's premise -- that superhuman AI is a potential threat to humanity? And how plausible, from a scientific viewpoint, is the sequence of events leading to Caster's mind being reconstituted, in vastly magnified form, inside a quantum computer?

AI has a scientific goal -- to understand intelligence as a general property of systems -- and an engineering goal -- to build intelligent systems using this understanding. The creation of superhuman AI systems is one possible outcome, but far from the only one; after all, a biological super-race is one possible outcome of genomic research, but not the only one.

Nonetheless, superhuman AI is an attractive goal. All benefits of civilization flow from our intelligence, and superhuman AI could greatly magnify those benefits and help solve humanity's pressing problems, leading to a golden age of peace and plenty. Dr. Evelyn Caster's inspirational presentation to potential funders stresses this aspect of their work, and Will Caster's computational reincarnation tries to put it into practice.

Is superhuman AI a reachable goal? Quite possibly. To feel comfortable that it's not, you'd have to bet your future against the combined might of Google, Microsoft, Apple, IBM, and the world's military establishments. The drawback of superhuman AI, as the film illustrates, is its very superness. Such systems can very quickly improve their own capabilities, going far beyond human understanding and control. If their goals happen not to include human welfare (and making sure they do is, at present, an unsolved problem), then the future is no longer ours.
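The worry about rapid self-improvement can be made concrete with a toy calculation. The sketch below is purely illustrative -- the improvement factor and cycle count are arbitrary assumptions, not a model of any real system -- but it shows why even modest compounding gains per redesign cycle quickly dwarf a fixed human baseline:

```python
# Toy illustration: compounding self-improvement versus a fixed baseline.
# All numbers are arbitrary assumptions chosen for illustration only.

capability = 1.0          # start at human-equivalent ability
human_baseline = 1.0      # humans do not self-redesign, so this stays fixed
improvement_factor = 1.5  # assumed capability gain per self-redesign cycle

for cycle in range(10):
    capability *= improvement_factor  # each cycle builds on the last

ratio = capability / human_baseline
print(f"After 10 cycles: {ratio:.0f}x the human baseline")
```

At a 50 percent gain per cycle, ten cycles already yield a system roughly 58 times the baseline; the point is the shape of the curve, not the particular numbers.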

The sequence of events leading to Caster-AI involves a quantum computer, PINN, which already exhibits signs of true intelligence. Many computer scientists find it quite plausible that quantum computation, currently a theoretical possibility supported by promising small-scale experiments, could mature into a usable technology. It will undoubtedly make a huge difference in what can be computed efficiently, but it has nothing to do with whether a machine can be conscious. Indeed, on that topic, neither AI nor philosophy has anything conclusive, or even suggestive, to contribute. If someone gave me a trillion dollars tomorrow to develop a conscious machine, I'd just have to give it back.

The movie's plot requires not only the possibility of conscious machines but also that of uploading one's mind into a computer. Here one must truly suspend disbelief. The idea that a few electrodes could capture and transfer all the structure and activity of the brain's 100 billion neurons and 200 trillion synapses is a non-starter. The closest any serious proposal comes to reproducing a human brain in a computer -- so-called whole-brain emulation, or WBE -- is the process of vitrifying a brain, slicing it very thinly, tracing the connections of billions of neurons through millions of slices, and recreating those connections in a computer program. This has been done for the 302 neurons of the tiny worm C. elegans, but the simulated worm does not yet behave at all like the real one.
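The sheer scale of the data involved is easy to underestimate. A back-of-envelope calculation using the neuron and synapse counts quoted above makes the point; the bytes-per-synapse figure is a deliberately optimistic assumption of mine, not an established number:

```python
# Back-of-envelope estimate of the raw data a whole-brain emulation (WBE)
# would have to capture, using the figures quoted in the text. The storage
# cost per synapse is an illustrative assumption, not a measured quantity.

NEURONS = 100e9         # ~100 billion neurons (figure quoted in the text)
SYNAPSES = 200e12       # ~200 trillion synapses (figure quoted in the text)
BYTES_PER_SYNAPSE = 8   # assumed: target neuron ID plus connection strength

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15
print(f"Connectome alone: ~{petabytes:.1f} petabytes")
```

Even under this rosy assumption -- ignoring neuron state, chemistry, and dynamics entirely -- the static wiring diagram alone runs to petabytes, far beyond anything "a few electrodes" could read out.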

To focus on technical plausibility, however, misses the point of the film. AI researchers must, like nuclear physicists and genetic engineers before them, take seriously the possibility that their research might actually succeed and do their utmost to ensure that their work benefits rather than endangers their own species. The sooner we start, the better.