Jared Lee Loughner may remain silent in federal custody, but we've already heard what he's had to say.
The messages he's left on YouTube and MySpace speak volumes about what kind of person is capable of a shooting spree that kills six and leaves a Congresswoman in critical condition.
Within hours of the incident, a profile emerged from his online activities: a deeply troubled young man, resentful of government and, judging from the incoherence of his expressed thoughts, quite possibly suffering from an undiagnosed mental illness. Sites like Facebook and Twitter function as a public diary, with each status update or homemade video offering another insight into their users' psyches.
It seems like so many of the people who explode into mass violence leave behind a trail of clues on social networks, evidence of the character traits that lead them to ruin. So why don't we do more to detect these patterns before it's too late? There are a number of ways social networks could do more to get their most troubled members the treatment that their own posts make plain they desperately need.
Warped innermost thoughts are on exhibition to hundreds, if not millions, of others, depending on the site in question. Each utterance is exhaustively documented, allowing anyone to comb through an entire history, and the distinct personality patterns it reveals, with a few clicks of a mouse.
The evidence that comes from social networks is not unlike those inevitable press interviews with the people who knew the alleged perpetrator, conducted in the days after a violent outbreak. Listening to them recount the many aberrant behaviors that practically prophesied the violence to come, an obvious question hangs in the air: If it was so clear in retrospect that this person was headed down the wrong path, why wasn't more done to get him or her professional help?
It's not as if Loughner is an aberration. He's just the latest psychopath whose activity on social networks unwittingly left a self-portrait for all the world to see. Consider George Sodini, a Pittsburgh-area analyst who poured out his troubled soul in YouTube videos and on his own blog before killing three women when he opened fire inside a health club in 2009. Or 15-year-old Alyssa Bustamante, who stabbed a 9-year-old to death after listing "killing people" as a hobby on her YouTube profile, among other dysfunctional thoughts she shared on MySpace and Facebook.
These people practically waved a sign to the world announcing their murderous intentions, and no one reached out to them. The point wouldn't be to apprehend them; you can't convict someone of crimes they have yet to commit.
This is not about fighting crime; it's about mental health. When a person exhibits telltale signs of a mental disorder, those signs can be quickly matched with an appropriate response. Sometimes that could be a perfectly voluntary opportunity to talk to someone about their problems. In the event of direct threats, police can be notified.
Given the vast sea of hundreds of millions of profiles that make up a global social network, it might seem unrealistic to expect every cry for help to be heeded. But let's not forget that sites like Facebook are highly sophisticated networks that offer advertisers the ability to target individuals with marketing messages relevant to what they say and do online. Why can't we do the same for dysfunctional people?
You could argue it's a lot easier to put a diaper ad in front of a mother posting her thoughts about raising a newborn. But technology has already been developed to detect emotion on social networks. Earlier this year, Israeli researchers at a Web Intelligence conference presented Pedesis, software capable of analyzing text to reveal an author's level of depression.
In another academic breakthrough, a University of Geneva professor working with MIT's Mind Machine Project disclosed a software project of their own, one that could scan massive amounts of text or voice data to single out individuals exhibiting character traits suggestive of terrorist activity.
This all might seem vaguely Orwellian, but in the context of social networks, is it really? Privacy is a pretty relative expectation on a site like YouTube, where all the world truly is a stage.
On some level, a person who shares their anguish via social media clearly wants help; otherwise they wouldn't be broadcasting their woes. Just think of Abraham Biggs, a 19-year-old who committed suicide on the live-video platform Justin.tv in 2008. Not only did onlookers ignore what was an obvious cry for help, some actually egged him on to follow through on his threat. Conversely, in 2009, actress Demi Moore was able to thwart a suicide attempt by alerting authorities to a message she received from one of her many Twitter followers.
Whether the threat is suicide or homicide, social media can be an environment where quick intervention makes the difference between life and death. And it doesn't have to come down to some internet-based algorithm detecting someone in trouble; the human beings who populate these networks can just as easily flag disturbing behavior. Perhaps it's time the digital equivalent of the blue-light emergency phones found on many college campuses were installed on Facebook et al.
If there could be said to be any upside to a profound tragedy like the Arizona massacre, it's that the incident underscores the importance of mental-health services. But in this day and age, those services need to be more accessible in the very forums where dysfunctional behavior actually surfaces.