On Monday, Apple showed off the newest capabilities of Siri, its popular voice-activated smartphone assistant. Following an update, Siri will be able to search through the day's sports scores, read out movie times at the local theater, and find dinner reservations for you -- and all you have to do is speak to it as you would to another human.
Siri is, as I have written, the most important feature on Apple's most important product. This is not just true financially: With Siri, and its pervasive advertising campaign, Apple has staked its claim as the company most closely associated with natural language control of devices.
Apple, however, is not alone in marking its territory in the future of human-device interaction.
Think about what are arguably the three most elite consumer tech companies in the world: Apple, Google and Microsoft. All are clearly attempting to transform the traditional point-and-click experience; and yet, interestingly, these companies have conceived three radically different ideas about how we will interact with our devices in the coming years.
Specifically: Apple thinks that we will talk to them (Siri); Google thinks we will stare through them (Google Glasses); and Microsoft thinks we will wave our hands in front of them (Kinect).
I have highlighted what I consider each company's most distinguishing and inspiring product in the personal device sector; each one demonstrates its company's core idea about the future. Apple has Siri, a voice-based system that can retrieve information when you ask for it; Google will soon have Google Glasses, a sight-based system in which everything on a tiny screen in front of your eyeball can be controlled by simple eye movements; and Microsoft has Kinect, in which an operating system can be manipulated via body movement, simulating "touching" a screen but really moving your hands in the air, touching nothing.
In other words, each company chose one of the five human senses on which to build its future, betting that the sense it has chosen will be the dominant means by which we interact with our devices. Apple has chosen hearing (speech); Google has chosen sight; and Microsoft has chosen touch.
(The choices also align, vaguely, with visions from popular science fiction of decades ago. Apple and Siri -- the ability to talk to your device, and have it understand and respond to your requests -- is Knight Rider. Google and its Glasses -- with the ability to have data about the world you are looking at displayed in front of your eyes -- is Terminator. And Microsoft and Kinect -- the ability to virtually move applications with your movements -- is Minority Report. Perhaps engineers at Google, Apple and Microsoft spent their childhoods nose-deep in sci-fi literature and film.)
But back to the senses, and Apple, Google, and Microsoft. We have all seen what Siri, the speech-based system, can and cannot currently do, and it is easy to predict where Siri is going. As my friend Andrew Ferguson, a computer scientist at Brown University, wrote me, Apple seems to be steering Siri toward the classic futurist dream of a personal assistant who simply understands, and has a corresponding response for, everything you say. Expect Apple to continue to build out Siri, to enhance its capabilities, and to make it available on all of its future products: Soon you will be able to talk to your smartphone, tablet, MacBook, television, and -- yes, Knight Rider fans -- automobile.
Google, meanwhile, laid out its vision for a sight-operated device in an enormously viral video this past April, and the Glass prototypes are being rigorously tested as we speak. Though the lens-less eyeglasses can also be controlled via touch and voice, the most intriguing and captivating operating method comes via sight. We know that, even in the earliest stages, one can operate the camera and post photos to Google+ simply by looking in the correct direction. Maps with turn-by-turn directions that flash before one's eyes are also being tested. The Terminator-style overlay of an operating system -- which can be controlled merely by blinking and staring in the right spot -- could be closer than we think.
Finally, the idea of Microsoft's Kinect, the touch-based system, has clearly touched a nerve. For now, it is officially used only on televisions with the Xbox, where the motion-tracking system can control applications like Netflix and ESPN as you move your hands in front of you as though you were touching the screen. Kinect, however, is far more useful than a reinvention of the TV remote: This technology could be on your next laptop, and hacks of the Kinect sensor have shown the system being used to control robots, perform surgery, and manipulate a projection of your smartphone's screen. Future applications of the Kinect sensor could turn any surface into a touchscreen, and any smartphone or tablet into a device that could be manipulated without needing to see the screen.
Hearing, sight and touch. Apple, Google, and Microsoft will depend on each, respectively, to power future generations of devices and entice future generations of consumers. Debating which is "right" or "wrong" is pointless -- all are likely to converge into single devices at some point. In the immediate future, however, these are the individual roles to be mindful of: Apple as Knight Rider, Google as Terminator, and Microsoft as Minority Report.
And our role? Simple: Sit back, relax and watch science fiction transform into science fact.