A Maryland couple accused of "neglect" earlier this year by Maryland Protective Services for letting their children -- ages 10 and 6 -- walk home alone from a neighborhood park found themselves in the news again this week. This time, police officers took their children into custody after the parents had dropped them off at the park to play, unattended, for two hours. In Silver Spring, Maryland, where this family lives, leaving anyone under age 18 unsupervised constitutes neglect.
How is it that a society so protective of children offline has left them largely unsupervised online?
"Aided by the convenience and constant access provided by mobile devices, especially smartphones, 92% of teens report going online daily -- including 24% who say they go online 'almost constantly,'" reports a new study from Pew Research Center.
On this virtual playground, where kids are being unleashed at increasingly younger ages, their activities are largely unsupervised. Few parents know what goes on in Snapchat, Yik Yak or Kik. Granted, as in real life, a lot of wonderful things happen when kids get together to "play" -- they learn new skills and make new friends, for example. But dangers lurk online too -- predators, cyberbullying, sexting, pornography, suicide risk and more. Admittedly, the risk of kids getting into big trouble like this is low (but so is the risk of being kidnapped on the way home from the park, by the way). It's the little trouble online that's so worrisome: everything from a damaged reputation and exposure to inappropriate content to the hurt feelings that come from seeing photos of the sleepover you weren't invited to.
Privacy vs. Security
Thanks to government whistleblower Edward Snowden, Americans are now largely aware that phone conversations and Internet data are being monitored all the time, yet surprisingly few find this information disturbing. According to the latest national survey by the Pew Research Center and The Washington Post, there are no indications that Snowden's revelations of government surveillance have altered fundamental public views about the tradeoff between investigating possible danger and protecting personal privacy. In other words, 62% of Americans are willing to let the federal government keep an eye out for possible threats, even if that intrudes on personal privacy.
So why aren't these same large numbers of Americans seeking similar "surveillance" of the online threats that affect our children? The answer, it turns out, is complex.
Monitoring vs. Spying
"Most parents don't want to spy on their kids and many who do lack the strong stomach, thick skin and discipline required to wade through the icky stuff kids say to get to the stuff that could cause permanent harm," says Bob Dillon of Artimys Language Technologies, a company that uses machine-learning techniques for monitoring a child's text-based messages on popular social media sites.
"In addition, there are a significant number of parents who simply believe that giving kids some privacy to make normal teen mistakes is a healthy part of growing up."
However, as Dillon pointed out, there are different levels of "surveillance" available, ranging from total spyware to "mild-monitoring" by machine. Artimys is a "mild-monitoring" solution, using "Machine Learning" (a branch of Artificial Intelligence), "Sentiment Analysis," and "Natural Language Processing" to distinguish between conversations that represent a threat to a child and those that do not.
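To make "mild monitoring" concrete, here is a deliberately simplified sketch of how text-based screening can flag a conversation without exposing everything a child writes. This is not Artimys's actual system -- the word list, scoring, and threshold below are all invented for illustration, and a real product would use trained machine-learning models rather than a hand-made lexicon.

```python
import re

# Toy illustration of "mild monitoring": score a message against a small
# risk lexicon and surface it to a parent only when it crosses a threshold.
# A real system would learn these signals from data; this hand-picked word
# list is purely a stand-in.
RISK_TERMS = {"secret", "alone", "meet", "hurt", "hate", "address"}

def risk_score(message: str) -> int:
    """Count how many distinct risk terms appear in the message."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return len(words & RISK_TERMS)

def should_alert(message: str, threshold: int = 2) -> bool:
    """Alert only when several risk signals co-occur."""
    return risk_score(message) >= threshold

# Ordinary chatter stays private; only the worrisome message is flagged.
print(should_alert("want to grab pizza after practice?"))       # False
print(should_alert("keep it a secret, we can meet you alone"))  # True
```

Requiring several signals to co-occur, rather than alerting on any single word, is one simple way to cut down on false alarms -- the "icky stuff" Dillon says parents lack the stomach to wade through.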
"Mildly monitoring" the online conversations of kids to detect activity of genuine concern seems like a no-brainer to me. After all, we've been using Artificial Intelligence that allows computers to detect patterns in real-world data and applying it to new data in all kinds of important applications for years-- take Siri, for example. While adults expect this kind of technology to be at our service when we need directions to the nearest Starbucks, few are clamoring for this tech to be used for something we love at least as much as caffeine -- our kids.
What Parents Don't Know
According to Robert Reichmann, of VISR, "If parents truly recognized how vulnerable their children were online they'd change their minds." VISR, a new technology that works directly with many of the networks kids love and use, carefully analyzes their social interactions and alerts parents only when there is a problem they need to be concerned about. But like Dillon, Reichmann finds parents largely in the dark about the serious issues kids can face online and also uninformed about the solutions available to them.
So if parents aren't seeking out this technology, what about kids? I asked a classroom of young teens if they liked the idea of technology like VISR or Artimys keeping watch on their social interactions in order to alert parents only if and when there is a potential problem. Their response? Are you kidding me?!? NO WAY!
Kids Have an Expectation of Privacy
Research confirms that younger Americans are more likely than older age groups to prioritize protecting personal privacy over personal safety. I find this to be increasingly true of the under-18s I work with in the classroom. After all, they've largely been allowed to roam free on the Internet, well away from the prying eyes of adults, from the moment they were first handed a cellphone. That's hard to take back.
So if parents aren't seeking out these technological solutions and kids don't want them, what incentive do social media networks have to work with companies like Artimys or VISR? The answer is simple: none. At the moment, children's expectation of privacy far outweighs parents' demand for their safety, and the marketplace is responding accordingly.

Additionally, many social media apps (Snapchat, for example) don't provide a public interface for developers (an "API," in technical terms) that would allow technology like VISR or Artimys to actually work with their app. In cases like this, VISR works directly with social media networks that are willing (one example is its new partnership with Kids Email, which prioritizes child safety). Artimys bypasses this roadblock by offering a messaging inbox that aggregates all of the child's social activity from sites like Facebook, Instagram, Snapchat, SMS/MMS and more into one place.

In both of these examples, however, kids have to be willing, collaborative participants in this "mild-monitoring" scheme, or it doesn't work. That is most easily accomplished when it's offered to the child as a prerequisite to getting their first phone (i.e., You want a new phone? Great! It comes pre-loaded with this new safety technology).
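The aggregated-inbox workaround can be pictured as a set of per-network adapters feeding one shared feed. Everything below is hypothetical -- the class names and stub adapters are invented for illustration, since real adapters would need each network's (often nonexistent) public API or the child's cooperation in connecting their accounts.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class Message:
    network: str   # e.g. "sms", "instagram"
    sender: str
    text: str

class Source(Protocol):
    """Anything that can pull a child's messages from one network."""
    def fetch(self) -> List[Message]: ...

class StubSmsSource:
    """Stand-in adapter; a real one would read the device's SMS store."""
    def fetch(self) -> List[Message]:
        return [Message("sms", "jordan", "see you at the game")]

class StubInstagramSource:
    """Stand-in adapter; a real one would need the network's API and consent."""
    def fetch(self) -> List[Message]:
        return [Message("instagram", "sam", "nice photo!")]

def aggregate(sources: Iterable[Source]) -> List[Message]:
    """Merge every network's messages into one inbox that a monitoring
    layer can then scan in a single place."""
    inbox: List[Message] = []
    for source in sources:
        inbox.extend(source.fetch())
    return inbox

inbox = aggregate([StubSmsSource(), StubInstagramSource()])
print([m.network for m in inbox])  # ['sms', 'instagram']
```

The design point is that monitoring happens once, over the merged feed, instead of requiring a separate integration with every app -- which is exactly why the child's willingness to connect each account is the linchpin of the scheme.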
We Have the Solutions, Just Not The Desire
Both Dillon and Reichmann are technologists working on these solutions because they each have kids and are both genuinely concerned about child online safety. As Reichmann pointed out, this is an exciting time for technology like his and there is a lot of opportunity for growth -- just not in the kid space because the demand is not there. "Technology like VISR can create a much safer world," says Reichmann, "but it cannot succeed in this space unless parents rally around us."
The Perfect Solution?
As Dillon shared his struggles trying to deliver his product to a receptive audience, I asked him to describe his "ideal scenario":
- Parents tell social networks, loudly and unequivocally, that child safety is a vital priority.
- Social networks offer open APIs and/or work with technologists developing products designed to keep kids safe online.
- Everyone -- parents, social media networks, and safety-product developers -- agree on a protocol or standards surrounding child safety on social media.