I have been perplexed lately by the media frenzy on the topic of artificial intelligence (AI) and all the inflammatory statements put forth about "deadly machines" and "robot uprisings." Of course, this can partly be explained by the public's general taste for frivolous alarmism and the media's attempt to satisfy it. However, I feel that beyond the question of why we react this way in general, there is another important question worth asking: why this particular topic? Why AI?

Why is AI capturing so much of our attention and imagination? Why is it so hard to have a levelheaded discussion about it? Why is the middle ground so infertile for this topic?

I have come to believe that the reason is that AI engages some of our deepest existential hopes and fears and forces us to look at ourselves in novel, unsettling ways. Even though the ways in which we are forced to face our humanity are new, the issues and questions are old. We can trace them back to stories and myths that we've told for ages, to philosophical questions we've posed in various forms throughout the centuries, or to deeply rooted psychological mechanisms that we've slowly discovered. Here are four of the deeper existential questions that AI forces us to ask:

What if we get what we ask for but not what we really want?

Or, in the words of Coldplay's "Fix You," "when you get what you want but not what you need," what happens then? The ancients were no strangers to this question. Legend has it that King Midas asked the gods to make everything he touched turn to gold. The king became rich, but he also died of starvation, because the food he touched turned to gold as well. AI, more specifically human-level or super-human AI, is that tantalizing golden touch. Any programmer has at some point experienced an inkling of it: the great power of a program that computes in moments what would take you several lifetimes -- but it's the wrong computation! Or rather, it's the right one, because it's exactly what you asked for, just not what you really wanted. Welcome to the birth of a computer bug!

Superhuman AI could of course magnify this experience and turn itself into our own buggy god that would give us tons of gold and no food. Why would it do that? AI researcher Stuart Russell likes to illustrate this through a simple example: imagine you ask your artificially intelligent self-driving car to get you to the airport as fast as possible. To do so, the car will drive at maximum speed, accelerating and braking abruptly... and the consequences could be lethal to you. In trying to optimize for time, the car will push all the other parameters -- speed, acceleration, and so on -- to extreme values and possibly endanger your life. Now take that scenario and extend it to wishes like: make me rich, make me happy, help me find love...

What this thought experiment should make us realize is that we blissfully live in the unspecified. Our wishes, our hopes, our values are barely small nodes of insight in the very complicated tapestry of reality. Our consciousness is rarely bothered with the myriad of fine-tuned parameters that make our human experiences possible and desirable. But what happens when another actor like AI enters the stage, one that has the power to weave new destinies for us? How will we be able to ask for the right thing? How will we be able to specify it correctly? How will we know what we want, what we really want?

What if we encounter otherness?

The issue of not being able to specify what we want thoroughly enough is partly due to our limited mental resources and our inability to make predictions in environments above a certain level of complexity. But why wouldn't our super-human machines be able to do that for us? After all, they will surpass our limitations and inabilities, no? They should figure out what we really want.

Maybe... but likely not. Super-human AI will likely be extremely different from us. It could in fact be our absolute otherness, an "other" so different from everything we know and understand that we'd find it monstrous. Zarathustra tells his disciples to embrace not the neighbor but the "farthest." However, AI might be so much our "farthest" that it would be impossible to reach, or to touch, or to grasp. As psychologist and philosopher Joshua Greene points out, we humans have a common currency: our human experiences. We understand when someone says "I'm happy" because we share a common evolutionary past, a similar body and neural architecture, and more or less similar environments. But will we have any common currency with AI? I like it when Samantha explains to Theodore in the movie Her that interacting with him is like reading a book with spaces between words that are almost infinite, and it is in these spaces that she finds herself, not in the words. Of course, a real-world AI would evolve so fast that the space between it and humans would leave no room for a love story to ever be told.

What if we transcend and become immortal but transcendence is bleak and immortality dreary?

But what if, instead of being left behind, we merge with the machines, transcend, and become immortal, just as AI advocate Ray Kurzweil optimistically envisions? Spending time with people who work on creating or improving AI, I've realized that beyond the immediate short-term incentives of building better voice recognition or better high-speed trading algorithms, many of these people hope to ultimately create something that will help them overcome death and biological limitations -- they hope to eventually upload themselves in one form or another.

Transcendence and immortality have been the promise of all religions for ages. Through AI we now have the promise of a kind of transcendence and immortality that does not depend on a deity, but only on the power of our human minds to transfer our subjective experiences into silicon. But as long as hopes of transcendence and immortality have existed, tales of caution have also been told. I am particularly fond of one tale explored in the movie The Fountain. When the injured, dying knight finally reaches the Tree of Life, he ecstatically stabs its trunk and drinks from it, and happily watches his wounds heal. But soon the healed wounds burst into bouquets of flowers, and he himself turns into a flowering bush that will live forever through the cycle of life and regeneration. That, of course, is not what the knight had hoped for... It's interesting that the final scene of the movie Transcendence also ends with a close-up of a flower, reminiscent of Tristan and Isolde and their tragic transcendence through a rose that grows out of their tombs. Of course, there are less mythical ways in which transcendence and immortality through AI could go wrong. For example, neuroscientist Giulio Tononi warns that even though we might build simulations that act like us and think like us, they will likely not be conscious -- it wouldn't feel like anything to be them. Heidegger saw in death a way to authenticity, so before we transcend it and become immortal, we might want to figure out first what is authentically us.

What if we finally fully know ourselves... and make ourselves obsolete?

Another promise from AI is exactly that: authentic knowledge about what we are. AI extends the promise that we could finally know ourselves thoroughly. A great part of AI research is based on brain simulation, so if we keep forging on we might actually figure out what every single neuron and every single synapse does; then we would hold the keys to our own consciousness, our own human experiences. We would finally be able to say a resounding "Yes!" to the imperative inscribed at the temple of Apollo at Delphi: "Know thyself." The catch is that, as my husband, physicist Max Tegmark, likes to point out, every time we've discovered something about ourselves we've also managed to replace it. When we figured out how strength and muscle power work, we replaced them with engines; when we understood more about computation, we invented computers and delegated that chore to them. When we discover the code to our human intelligence, our consciousness, and every human experience imaginable, will we replace that too? Is it our human destiny to make ourselves obsolete once we've figured ourselves out? Creating AI is in some sense looking at our own reflection in a pond -- just like Narcissus -- without realizing that the pond is looking into us as well. And as we fall in love with what we see, might we also be about to drown?

Will we figure out who we are, what we want, how to relate to what we are not, and how to transcend properly? These are big questions that have been with us for ages, and now we are challenged like never before to answer them. Humanity is fast approaching a point where leisurely pondering these questions will no longer be an option. Before we proceed on our journey to changing our destiny forever, we should stop and think about where we are going and what choices we are making. We should stop and ask: why AI?
