There was an interesting article from the AP this week titled "Israel blocks airborne protest, questions dozens." It described how Israeli security used social media sites to compile a "blacklist" of undesirable individuals and then prevented many of them from entering the country. It's a good read if you're interested in Palestinian/Israeli politics. I have no plans to discuss that here. What interests me is the ease with which they assembled the data, identified the people and literally stopped them at the border. Social listening posts have come a long way in a very short time and, whether or not you agree with the politics, it's a very cool use of the technology. That said, is it "Big Brother-ish"? Yep. Is it data-driven police work? Yep. Is it legal? Should it be?
Then there's MORIS, the Mobile Offender Recognition and Identification System by BI2 Technologies, LLC. Sean Mullin, CEO of the Plymouth, MA firm, says that his software can identify a person via facial recognition, iris recognition or fingerprint recognition using an iPhone app. Police nationwide are testing this technology. Is it "Big Brother-ish"? Yep. Is it data-driven police work? Yep. Is it legal? Should it be?
How does it work? SIFT (Scale-Invariant Feature Transform) has been around since 1999. SIFT is a very popular computer vision algorithm first published by David Lowe. Depending on the quality of the data, a system that uses SIFT features for 2D and 3D object recognition can be very accurate. Clever programmers have written incredibly efficient, scalable apps that use SIFT features to identify everything from logos to landmarks, and fingerprints to retina scans. In fact, if a digital camera can see it, there's a very good chance that a computer can be programmed to recognize it.
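To make the idea concrete, here's a minimal sketch of the matching step at the heart of SIFT-style recognition: each image is reduced to a set of feature descriptors (vectors), and a query descriptor is accepted as a match only when its nearest database descriptor is clearly closer than the second nearest (Lowe's "ratio test"). This is an illustrative toy, not BI2's product or Lowe's full pipeline; real SIFT descriptors are 128-dimensional, and the 2-D vectors below are invented for demonstration.

```python
import math

def match_descriptors(query, gallery, ratio=0.75):
    """Match each query descriptor to gallery descriptors using
    Lowe's ratio test: keep a match only if the nearest gallery
    descriptor is much closer than the second nearest. Returns a
    list of (query_index, gallery_index) pairs."""
    matches = []
    for i, q in enumerate(query):
        # Distance from this query descriptor to every gallery descriptor
        dists = sorted((math.dist(q, g), j) for j, g in enumerate(gallery))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:  # unambiguous nearest neighbor -> accept
            matches.append((i, j1))
    return matches

# Toy 2-D "descriptors" (hypothetical values, for illustration only)
query = [(1.0, 0.0), (0.0, 5.0)]
gallery = [(1.0, 0.1), (10.0, 10.0), (0.0, 4.9)]
print(match_descriptors(query, gallery))  # [(0, 0), (1, 2)]
```

With enough of these unambiguous matches between a photo and a stored image, the system declares the objects (or faces) the same, which is why matching quality degrades gracefully with photo quality.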
So, once again, we find that our technology is way ahead of our laws, and even of our strategies for living our lives. If my iPhone can identify people with criminal records, shouldn't it do so before I tag and friend them on Facebook? If someone is a registered sex offender, and they appear in the photo of my kid's birthday party that I just posted to Flickr, shouldn't the system tell me?
Of course, recognition software by itself doesn't do anything. You need a database of images and metadata (the data that describes the images) to compare to the image you want recognized. Who is the keeper of that database? Who has pictures of bad guys? Who decides who the bad guys are? How is that database going to be protected? If you accidentally end up in the bad guy database, how do you get out?
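The division of labor described above can be sketched in a few lines: the recognizer only answers "which stored template is closest, and how close?" Identity comes from a separate metadata record, and a distance threshold decides whether to report a hit at all. Everything here (the records, the threshold, the field names) is hypothetical, but it shows where the policy questions live in the code.

```python
import math

# Hypothetical database: (biometric template, metadata record).
# All values and field names are invented for illustration.
DATABASE = [
    ((0.1, 0.9), {"id": 101, "status": "watchlist"}),
    ((0.8, 0.2), {"id": 102, "status": "cleared"}),
]

def identify(probe, db, threshold=0.25):
    """Return the metadata of the closest template, or None if the
    best match is worse than the threshold. The threshold is a pure
    policy choice: too loose and innocent people get flagged, too
    strict and the system misses real matches."""
    template, meta = min(db, key=lambda rec: math.dist(probe, rec[0]))
    return meta if math.dist(probe, template) <= threshold else None

print(identify((0.12, 0.88), DATABASE))  # {'id': 101, 'status': 'watchlist'}
print(identify((0.5, 0.5), DATABASE))    # None
```

Notice that the code itself is indifferent to who curates `DATABASE` or how a wrongly added record gets removed; those answers live entirely outside the software.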
I don't have good answers for any of those questions. But they need to be asked.
In a world where identity theft is a front and center issue, I am having fun imagining the kind of insanity that will ensue as hackers add "good" people to the "bad" people database. And, it doesn't take much to imagine several nightmare scenarios for people caught on the wrong side of this technology.
As we transition from the industrial age to the information age, we are going to be faced with these kinds of choices quite often. In fact, since the rate of technological advancement is accelerating exponentially, these legal, moral and societal issues will come faster and faster, and we will just have to learn to deal with them.
There are legitimate concerns about privacy, illegal search and seizure, and the constitutionality of some of these advanced search tactics. There is also a valid argument that says, "Technology is inherently neither good nor bad." You've heard it put another way: "Guns don't kill people, people kill people." Ahh... if life were only that simple.
What to do? Well, I know that most of our elected officials are busy arguing about the debt ceiling, but jot your Senator or Congressman a note or email or, just tag a picture of them on your Facebook profile and put the message in the caption or on your wall... they're sure to see it there.
Follow Shelly Palmer on Twitter: www.twitter.com/@shellypalmer