As a long-time technology journalist, I'm a bit jaded and rarely get excited about a new product or technology, but when I saw a demonstration of Aurasma, from Autonomy, I had to agree that this technology may change the way we look at real-world objects.
The technology allows you to point a smartphone or tablet at a static photograph or a physical object and have the software recognize the object and then immediately play a video, launch a game or create an interactive experience in real time while you're pointing your device at the object.
Imagine, for example, a photograph on the front page of a newspaper. You point your phone at the photo and suddenly you see it turn into a video. It still appears in the same spot in the paper only now there is motion and sound. It could also work with a physical object like a building, a car or a new toaster. Point your phone or tablet towards the object and perhaps you'll see a video about it or, in the case of a product, a review or a segment about similar products.
I got some hands-on time with the technology and was impressed by how tolerant it is of variations such as angles and lighting conditions. One example they provided was a reproduction of the Mona Lisa, which came to life as I aimed an iPad 2 in its direction. After that, I pulled up an image of the same piece of art on my Android phone, which Aurasma recognized even though the size, lighting and angle were different.
Of course, this isn't the first smartphone application designed to recognize physical objects, but unlike many, it doesn't require a bar code or QR code -- it recognizes the object itself.
A phone, according to Autonomy CEO and co-founder Mike Lynch, can recognize about half a million objects.
In order for a device to recognize an object, someone would have had to enter it into a database. If you aim it at a generic house, it might recognize it as a house but wouldn't know much about it. If someone had coded that house -- perhaps as a historic landmark -- it would then recognize it and be able to launch a video or game based on that particular building.
The technology is also able to superimpose images on top of real-world objects. You could point the phone toward a building and see a monster walk through the front door or a helicopter land on top, assuming, of course, that someone had written an application to do that.
"The goal here is more to take products or newspapers and allow people to insert virtual information around those," said Lynch. It can also make products interactive. You could point the phone at a box of corn flakes, for example, and "get someone showing you a recipe for cornflakes or nutritional information, or if it's a kids' cereal, perhaps the characters on the front of the box come to life and start talking to you."
The technology, said Lynch, "is designed to recognize man-made objects rather than things of nature." He said it's more difficult to recognize different types of dogs, for example.
The company plans to license the technology to application developers and is likely to announce partnerships with media companies and consumer product companies, which will use it to enhance their advertising. There are also potential non-profit applications, as well as uses in tourism, medicine and other fields.
Listen to Interview with Mike Lynch
You can read more and listen to my 12-minute podcast interview with Autonomy CEO Mike Lynch at my For the Record podcast and blog on CNET News.com.
Video Demonstration Provided by Autonomy
Follow Larry Magid on Twitter: www.twitter.com/larrymagid