Since my TED Talk in February I've had dozens of opportunities to present the story of how my son learned his first words. The story, of course, has a twist: it's based on a unique home video collection that my wife and I (as speech and cognitive scientists, respectively) decided to create to study his language development in the natural setting of our home.
People's responses tend to follow a common path.
- Stunned, momentarily, when I explain how the study led us to wire our entire home with microphones and cameras ... to the tune of 200,000 hours of video and audio.
- Relieved to hear of our carefully designed privacy protections during the recording and analysis. In the end, this is a really big home video collection. And while ours has a scientific purpose and corresponding privacy safeguards, it wouldn't surprise me to see people in the future preferring more candid, naturalistic home videos of this sort (if not this scale!).
- Curious to know how on earth we could sift through three years of verbal and non-verbal interactions between my son and his caregivers (my wife, our nanny and me), find the moments that matter... and then make meaning of them.
- Patient with the technical answer: data visualization that lets us see patterns in the recordings, and deep machine learning algorithms that 1) discover semantic connections within the video and audio and 2) trace the "birth" of words back to their social context. (Since my TED talk, my lab at MIT has demonstrated that the "wordscapes" described in the talk are highly predictive of the order in which my son learned words.)
- And, finally, touched to listen to one of our first magical glimpses into this data: a "time-lapse audio" of my son's transition from saying "gaga" to "water." (I must say, it still gets me every time, too.)
I like to think that, in their responses, people are connecting the dots between a singular study in a family's home and the broader goal of unraveling the mysteries of child language development. Though it's still early in the project, we believe the emerging research methodologies, technologies and findings may have profound implications for challenges as diverse as "educating" cognitive robots and treating disorders that affect children's development of communication and related social skills.
Analyzing child language development will continue to be a long-term focus of my wife's and my academic work. But in the meantime, ideas originating with this research have pulled me down a surprisingly different path within the world of communications -- the convergence of television and social media. We are entering a period of profound change in how the world communicates as social media comes crashing into the realm of mass media. Making sense of all this, and perhaps playing a role in shaping it, will occupy much of my attention in 2012 as cofounder and CEO of Bluefin Labs. The last part of my TED talk touched on this new direction.
Just as the study of my son's early language uses deep machine learning to trace the birth of a child's words in a natural setting back to the larger social contexts in which they're formed, Bluefin is creating similar learning systems to interpret the intended meaning of social media comments made "in the wild." Bluefin's technology analyzes and organizes the ocean of social media conversations about television content (from shows to events to commercials), then generates a massive dataset of links between TV and Social. We call this dataset the TV Genome.
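To make the idea of "links between TV and Social" concrete, here is a toy sketch -- not Bluefin's actual pipeline, and with entirely hypothetical show names, schedule, and matching rule -- of how a time-stamped social media comment might be linked to a TV airing by broadcast window and a mention of the show's name:

```python
from datetime import datetime, timedelta

# Hypothetical airing schedule: (show name, start time, duration in minutes)
AIRINGS = [
    ("Evening News", datetime(2012, 1, 9, 18, 30), 30),
    ("Late Show",    datetime(2012, 1, 9, 23, 0), 60),
]

def link_comment(comment_text, comment_time, airings=AIRINGS, slack_min=15):
    """Return the shows a comment plausibly refers to: posted during the
    broadcast (or within a short grace window after it ends) and
    mentioning the show's name in the text."""
    matches = []
    for show, start, duration in airings:
        end = start + timedelta(minutes=duration + slack_min)
        in_window = start <= comment_time <= end
        mentions_show = show.lower() in comment_text.lower()
        if in_window and mentions_show:
            matches.append(show)
    return matches

# A comment posted at 18:45 that mentions the news links to "Evening News"
print(link_comment("Watching the Evening News coverage", datetime(2012, 1, 9, 18, 45)))
# → ['Evening News']
```

A real system would of course need far subtler language understanding than name-matching -- resolving slang, hashtags, and indirect references -- which is exactly where the machine learning described above comes in.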
Bluefin is using social TV analysis, and the TV Genome in particular, to solve a communications problem a century in the making. Radio and TV media technology distanced broadcasters from their audiences -- creating a one-way flow of content and interrupting the natural feedback loop between speaker and audience. While people all along were commenting, in living rooms and bars alike, on what they were watching, their response wasn't visible or readable. The balance shifted heavily toward programmers and away from audiences. A yawning feedback gap opened that researchers have tried for decades to fill with focus groups, dial tests and surveys.
Social media is helping companies like Bluefin close that gap, re-shifting the balance and re-connecting speaker and audience to a potentially revolutionary effect: with a complete feedback loop, mass media communicators can adapt more fully and quickly to the needs and interests of the people.
Some applications of social TV analysis technology are obvious. Armed with true measures of viewer engagement (as opposed to mere consumption), television networks and distributors are now able to "tune" their content more precisely to audiences -- increasing value for advertisers. For their part, marketers can now use cause-and-effect engagement measures to optimize their creative development and media spend: which TV shows have the most affinity with their target segments; which dayparts or networks drive the strongest response; and which ads, in the end, perform best and worst.
But the revolution will go well beyond marketing. Clearer links among statements, responses and eventual actions could help restore some faith in political communications. Until recently, electronic mass media had the same effect on politics as it had on marketing communications -- distancing politicians from voters. The 2008 Obama campaign largely closed the candidate/voter media feedback loop with its use of social media to understand, engage and then mobilize voters. In 2012, we'll no doubt see campaign and news organizations advance these techniques, possibly by tracking voter response in real time against TV news coverage, candidate appearances, party events and, of course, advertising.
Some would suggest that such deep, immediate feedback only invites candidates to pander ever more to voter interests. Fair enough. But I'd flip that coin over and argue that a complete candidate/voter feedback loop also dials up accountability, strengthening our ability to hold candidates to their words and to use our own reactions to spur their ongoing actions.
The juxtaposition of child language research and social TV analytics may seem jarring. But in fact the same basic principles are at play: linking language to context; studying communication feedback loops; gathering (lots of) observational data in the wild; using data visualization to see patterns in the data; and developing machine learning to model, predict, and -- ultimately -- shed light on how people communicate.