How Google's Artificial Intelligence Killed Plato

If a self-taught computer can recognize cats, what else will it be able to recognize when computers reach WALL-E proportions?
This post was published on the now-closed HuffPost Contributor platform.

The big announcement in artificial intelligence at the end of June was that Google's computers taught themselves to recognize cats. I'm really hoping this won't lead to more animated gifs of cats in various stages of things wacky and madcap.

Google's been experimenting with artificial intelligence and object recognition for some time. Their Google Goggles app was one early move to build a database for such feats. What makes this effort different is that it doesn't rely on labeling images and teaching a computer to identify them from those labels -- the approach known as supervised learning.

According to their report, "Building High-level Features Using Large Scale Unsupervised Learning," Google and Stanford connected 1,000 computers, focusing their attention on 10 million stills from YouTube videos. The catch? None of the computers were let in on what they were looking at. In fact, part of the goal was to see if Google, using concepts from neuroscience, could get a computer to mimic the human brain.

The focus of this work is to build high-level, class-specific feature detectors from unlabeled images. For instance, we would like to understand if it is possible to build a face detector from only unlabeled images. This approach is inspired by the neuroscientific conjecture that there exist highly class-specific neurons in the human brain, generally and informally known as "grandmother neurons." The extent of class-specificity of neurons in the brain is an area of active investigation, but current experimental evidence suggests the possibility that some neurons in the temporal cortex are highly selective for object categories such as faces or hands... and perhaps even specific people...
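The paper's core idea -- feature detectors emerging from unlabeled data alone -- can be sketched in miniature. The toy autoencoder below is purely illustrative (my own assumption of how to demonstrate the principle, nothing like the scale or architecture of Google's 1,000-machine network): it never sees a label, yet its hidden units learn to act as crude detectors for the two patterns hiding in the data, simply by learning to reconstruct what it sees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled toy data: 200 noisy 4-pixel "images" drawn from two
# underlying patterns. The network is never told which is which.
patterns = np.array([[1.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 1.0]])
X = patterns[rng.integers(0, 2, size=200)] + 0.05 * rng.normal(size=(200, 4))

n_hidden = 2
W = 0.1 * rng.normal(size=(4, n_hidden))   # encoder weights
V = 0.1 * rng.normal(size=(n_hidden, 4))   # decoder weights
lr = 0.1

def forward(X, W, V):
    H = np.tanh(X @ W)      # hidden activations: the "feature detectors"
    return H, H @ V         # reconstruction of the input

_, X_hat = forward(X, W, V)
err_before = np.mean((X - X_hat) ** 2)

# Train by gradient descent on reconstruction error -- no labels involved.
for _ in range(500):
    H, X_hat = forward(X, W, V)
    grad_out = 2 * (X_hat - X) / len(X)
    grad_h = grad_out @ V.T * (1 - H ** 2)   # backprop through tanh
    V -= lr * H.T @ grad_out
    W -= lr * X.T @ grad_h

_, X_hat = forward(X, W, V)
err_after = np.mean((X - X_hat) ** 2)
print(f"reconstruction error: {err_before:.3f} -> {err_after:.3f}")
```

The only objective here is "reconstruct your input," yet useful internal features fall out as a side effect -- the same principle, at vanishing scale, behind the cat detector.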

The outcome is a self-learning computer network -- Alan Turing would be proud. Sarah Connor, look out!

It took over two millennia, but perhaps Google's results are also a response to the ancient Greek philosopher Plato. At the risk of oversimplifying, one of Plato's great concerns was that of object recognition. The classic example is a dog, but for our purposes, I'll use a cat. Why do we humans recognize cats as cats? They have four legs, but so do dogs. They have tails, but so do dogs. They have fur, but so do dogs. They have teeth, but so do... well, you get the picture.

The question has been a staple of philosophy courses for centuries. The bigger we go on the description (legs, tails, fur), the less specific we get. The smaller we go, say to atoms, the more completely any distinction is erased. Despite this difficulty, humans manage to recognize cats.

Plato's suggestion was that there was a World of Forms, that is, an immaterial, unchanging world where the archetypes for all created things exist. In philosophy, Plato's famous "Cave Allegory" is the primary source for this explanation. In this higher level of existence -- the actual world of reality, according to Plato -- we find the master "cat" form after which all cats are patterned. Cats here are merely shadows of the one real and true LOL cat in the sky.

According to Plato, the soul exists before its life on earth and, after death, returns to this World of Forms in the cycle of reincarnation. One's rebirth into the material world causes the soul to forget what it saw in the World of Forms, but daily experience gradually allows us to remember. When we see a cat, the reason we know it is a cat and not a dog is because we are suddenly struck by the memory of the original cat form.

Plato's idea appealed to philosophers and theologians for centuries. The Jewish philosopher Philo experienced ecstatic frenzies over it. The New Testament book of Hebrews utilized Plato for discussing the true form of the Temple in Heaven. Christian Platonism dominated theological thought for centuries. Artists like Vermeer hoped to paint an ideal representation of the Platonic form of beauty, and the writer-philosopher Iris Murdoch argued for Plato's Form of the Good as a basis for morality.

So Plato is, as one might say, kind of a big deal.

Presumably, Google's computer network is not the reincarnation of an old code-breaking machine from World War II, yet without the need for the World of Forms and without a soul, it discovered a cat. Maybe discussion of Plato's potential demise is already afoot, but my inner geek cannot help but speculate about what this means for the future of artificial intelligence.

If a self-taught computer can recognize cats, what else will it be able to recognize when computers reach WALL-E proportions? And I know this is a jump, but will computers need Isaac Asimov's "Three Laws of Robotics" (an arbitrary morality code that works as a Three Commandments for robots), or will they be able to self-recognize right and wrong?

Apparently, Google killed Plato. Will it kill our most popular argument for God too?
