Imagine if you could record your life -- everything you said, everything you did, available in a perfect memory store at your fingertips, so you could go back and find memorable moments and relive them, or sift through traces of time and discover patterns in your own life that had previously gone unnoticed. Well, that's exactly the journey that my family began five and a half years ago. This is my wife and collaborator, Rupal. And on this day, at this moment, we walked into the house with our first child, our beautiful baby boy. And we walked into a house with a very special home video recording system.
(Video) Man: Okay.
Deb Roy: This moment and thousands of other moments that were special to us were captured in our home because in every room in the house, if you looked up, you'd see a camera and a microphone, and if you looked down, you'd get this bird's-eye view of the room. Here's our living room, the baby bedroom, the kitchen, the dining room and the rest of the house. And all of these fed into a disc array designed for continuous capture. So here we are flying through a day in our home as we move from sunlit morning through incandescent evening and, finally, lights out for the day. Over the course of three years, we recorded eight to ten hours a day, amassing roughly a quarter-million hours of multi-track audio and video.
So you're looking at a piece of what is by far the largest home video collection ever made. (Laughter) And the personal impact of this data for our family has already been immense, and we're still learning its value. Countless unsolicited, natural moments, not posed ones, are captured there, and we're starting to learn how to find them.
But there's also a scientific reason that drove this project, which was to use this natural longitudinal data to understand the process of how a child learns language -- that child being my son. And so, with many privacy provisions put in place to protect everyone who was recorded in the data, we made elements of the data available to my trusted research team at MIT so we could start teasing apart patterns in this massive data set, trying to understand the influence of social environments on language acquisition. So we're looking here at one of the first things we started to do. This is my wife and me cooking breakfast in the kitchen, and as we move through space and through time, you see a very everyday pattern of life in the kitchen.
In order to convert these 90,000 hours of otherwise opaque video into something that we could start to see, we use motion analysis to pull out, as we move through space and through time, what we call space-time worms. And this has become part of our toolkit for being able to look and see where the activities are in the data, and with it, trace the pattern of, in particular, where my son moved throughout the home, so that we could focus our transcription efforts on all of the speech environment around my son -- all of the words that he heard from me, my wife and our nanny, and, over time, the words he began to produce. So with that technology and that data, and the ability to transcribe speech with machine assistance, we've now transcribed well over seven million words of our home transcripts. And with that, let me take you now for a first tour into the data.
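For the technically curious, here is a minimal sketch of the kind of frame-differencing motion analysis that could pull a space-time worm out of overhead video. It assumes OpenCV; the function, thresholds and input format are illustrative stand-ins, not the actual pipeline built for the project.

```python
# Minimal sketch: extract a "space-time worm" -- the (x, y, t) path of
# motion activity -- from overhead video via simple frame differencing.
# Thresholds are illustrative guesses, not the project's real values.
import cv2
import numpy as np

def space_time_worm(video_path, min_pixels=500):
    cap = cv2.VideoCapture(video_path)
    worm = []                       # list of (x, y, t) points
    prev, t = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        if prev is not None:
            # Pixels that changed since the last frame mark movement.
            diff = cv2.absdiff(prev, gray)
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            ys, xs = np.nonzero(mask)
            if xs.size > min_pixels:
                # Centroid of the moving region becomes one worm point.
                worm.append((float(xs.mean()), float(ys.mean()), t))
        prev = gray
        t += 1
    cap.release()
    return worm
```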
So you've all, I'm sure, seen time-lapse videos where a flower will blossom as you accelerate time. I'd like you to now experience the blossoming of a speech form. My son, soon after his first birthday, would say "gaga" to mean water. And over the course of the next half-year, he slowly learned to approximate the proper adult form, "water." So we're going to cruise through half a year in about 40 seconds. No video here, so you can focus on the sound, the acoustics, of a new kind of trajectory: gaga to water.
(Audio) Baby: Gagagagagaga Gaga gaga gaga guga guga guga wada gaga gaga guga gaga wader guga guga water water water water water water water water water.
DR: He sure nailed it, didn't he?
(Applause)
So he didn't just learn water. Over the course of the 24 months, the first two years that we really focused on, this is a map of every word he learned, in chronological order. And because we have full transcripts, we've identified each of the 503 words that he learned to produce by his second birthday. He was an early talker. And so we started to analyze why. Why were certain words born before others? This is one of the first results that came out of our study, a little over a year ago, that really surprised us. The way to interpret this apparently simple graph is: the vertical axis indicates how complex caregiver utterances are, based on the length of utterances, and the horizontal axis is time.
And we aligned all of the data based on the following idea: Every time my son would learn a word, we would trace back and look at all of the language he heard that contained that word. And we would plot the relative length of the utterances. And what we found was this curious phenomenon: caregiver speech would systematically dip to a minimum, making language as simple as possible, and then slowly ascend back up in complexity. And the amazing thing was that that bounce, that dip, lined up almost precisely with when each word was born -- word after word, systematically. So it appears that all three primary caregivers -- myself, my wife and our nanny -- were systematically and, I would think, subconsciously restructuring our language to meet him at the birth of a word and bring him gently into more complex language. And the implications of this -- there are many, but one I just want to point out -- is that there must be amazing feedback loops. Of course, my son is learning from his linguistic environment, but the environment is learning from him. That environment, those people, are in these tight feedback loops, creating a kind of scaffolding that had not been noticed until now.
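To make that alignment concrete, here is a minimal sketch of the analysis behind the graph, assuming hypothetical stand-ins for the transcript data structures: for each word, every caregiver utterance containing it is re-timed relative to that word's birth, and mean utterance length is then averaged per time bin.

```python
# Minimal sketch of the alignment behind the "dip" graph. The input
# formats are hypothetical stand-ins for the real home transcripts.
from collections import defaultdict

def dip_curve(utterances, birth_day, bin_days=7):
    """utterances: list of (day, [words]) caregiver utterances;
    birth_day: {word: day the child first produced it}.
    Returns {day offset from word birth: mean utterance length}."""
    bins = defaultdict(list)
    for day, words in utterances:
        for w in set(words):
            if w in birth_day:
                offset = day - birth_day[w]   # negative = before birth
                bins[round(offset / bin_days)].append(len(words))
    return {b * bin_days: sum(v) / len(v)
            for b, v in sorted(bins.items())}
```

If the talk's finding holds, pooling this curve over all 503 words would show utterance length bottoming out near offset zero and climbing on either side.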
But that's looking at the speech context. What about the visual context? Think of this as a dollhouse cutaway of our house. We've taken those circular fish-eye lens cameras, and we've done some optical correction, and then we can bring it into three-dimensional life. So welcome to my home. This is a moment, one moment captured across multiple cameras. The reason we did this is to create the ultimate memory machine, where you can go back and interactively fly around and then breathe video-life into this system. What I'm going to do is give you an accelerated view of 30 minutes, again, of just life in the living room. That's me and my son on the floor. And there's video analytics that are tracking our movements. My son is leaving red ink; I am leaving green ink. We're now on the couch, looking out through the window at cars passing by. And finally, my son playing in a walking toy by himself.
Now we freeze the action -- 30 minutes -- turn time into the vertical axis, and open up a view of these interaction traces we've just left behind. And we see these amazing structures -- these little knots of two colors of thread we call "social hot spots," and the spiral thread we call a "solo hot spot." And we think that these affect the way language is learned. What we'd like to do is start understanding the interaction between these patterns and the language my son is exposed to, to see if we can predict how the structure of when words are heard affects when they're learned -- so, in other words, the relationship between words and what they're about in the world.
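As a rough illustration of how a social hot spot might be detected from those traces, here is a sketch that flags sustained stretches where two position traces stay close together in space. The distance and duration thresholds are invented for the example.

```python
# Minimal sketch: find "social hot spots" -- sustained intervals where
# the child's trace and a caregiver's trace stay close in space.
import math

def social_hot_spots(child, adult, radius=1.0, min_frames=30):
    """child, adult: equal-length lists of (x, y) positions per frame.
    Returns a list of (start_frame, end_frame) intervals."""
    spots, start = [], None
    for i, ((cx, cy), (ax, ay)) in enumerate(zip(child, adult)):
        close = math.hypot(cx - ax, cy - ay) <= radius
        if close and start is None:
            start = i                       # interval opens
        elif not close and start is not None:
            if i - start >= min_frames:
                spots.append((start, i))    # long enough to count
            start = None
    if start is not None and len(child) - start >= min_frames:
        spots.append((start, len(child)))
    return spots
```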
So here's how we're approaching this. In this video, again, my son is being traced out. He's leaving red ink behind. And there's our nanny by the door.
(Video) Nanny: You want water? (Baby: Aaaa.) Nanny: All right. (Baby: Aaaa.)
DR: She offers water, and off go the two worms over to the kitchen to get water. And what we've done is use the word "water" to tag that moment, that bit of activity. And now we harness the power of the full data set and take every time my son ever heard the word "water" and the context he saw it in, and we use it to penetrate through the video and find every activity trace that co-occurred with an instance of "water." And what this data leaves in its wake is a landscape. We call these wordscapes. This is the wordscape for the word "water," and you can see most of the action is in the kitchen. That's where those big peaks are over to the left. And just for contrast, we can do this with any word. We can take the word "bye" as in "good bye." And we're now zoomed in over the entrance to the house. And we look, and we find, as you would expect, a contrast in the landscape where the word "bye" occurs in a much more structured way. So we're using these structures to start predicting the order of language acquisition, and that's ongoing work now.
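Here is a minimal sketch of how a wordscape could be accumulated, under the assumption that we have timestamps for every occurrence of a word plus an activity trace of (x, y, t) points. The grid size, time window and coordinate convention are illustrative.

```python
# Minimal sketch: build a "wordscape" -- a 2D histogram over floor
# positions, raised by activity near every moment the word was heard.
import numpy as np

def wordscape(word_times, trace, window=30.0, grid=(64, 64), extent=10.0):
    """word_times: timestamps at which the word occurred;
    trace: list of (x, y, t) with x, y assumed in [0, extent)."""
    heights = np.zeros(grid)
    for wt in word_times:
        for x, y, t in trace:
            if abs(t - wt) <= window:       # activity near the word
                i = int(min(x, extent - 1e-9) / extent * grid[0])
                j = int(min(y, extent - 1e-9) / extent * grid[1])
                heights[i, j] += 1          # raise the landscape
    return heights
```

The peaks for "water" would then pile up over the kitchen cells of the grid, while "bye" would concentrate near the entrance.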
In my lab at MIT, which we're peering into now -- this is the Media Lab -- this has become my favorite way of videographing just about any space. Three of the key people in this project, Philip DeCamp, Rony Kubat and Brandon Roy, are pictured here. Philip has been a close collaborator on all the visualizations you're seeing. And Michael Fleischman was another Ph.D. student in my lab who worked with me on this home video analysis, and he made the following observation: "Just the way that we're analyzing how language connects to events which provide common ground for language, that same idea we can take out of your home, Deb, and we can apply it to the world of public media." And so our effort took an unexpected turn.
Think of mass media as providing common ground, and you have the recipe for taking this idea to a whole new place. We've started analyzing television content using the same principles -- analyzing the event structure of a TV signal: episodes of shows, commercials, all of the components that make up the event structure. And we're now, with satellite dishes, pulling in and analyzing a good part of all the TV being watched in the United States. And you no longer have to go and instrument living rooms with microphones to get people's conversations; you just tune into publicly available social media feeds.
So we're pulling in about three billion comments a month, and then the magic happens. You have the event structure, the common ground that the words are about, coming out of the television feeds; you've got the conversations that are about those topics; and through semantic analysis -- and this is actually real data you're looking at from our data processing -- each yellow line is showing a link being made between a comment in the wild and a piece of event structure coming out of the television signal. And the same idea now can be built up. And we get this wordscape, except now words are not assembled in my living room. Instead, the context, the common ground activities, are the content on television that's driving the conversations. And what we're seeing here, these skyscrapers, are commentary linked to content on television. Same concept, but looking at communication dynamics in a very different sphere.
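As a toy stand-in for that linking step, here is a sketch that matches a comment to the best-fitting piece of event structure by simple lexical overlap. The real system's semantic analysis is far richer, and both input structures here are hypothetical.

```python
# Minimal sketch: link a social-media comment to a piece of TV event
# structure by Jaccard word overlap (a crude proxy for semantic match).
def link_comment(comment, events):
    """events: {event_id: transcript text}. Returns (best_id, score)."""
    c = set(comment.lower().split())
    def overlap(text):
        e = set(text.lower().split())
        return len(c & e) / len(c | e) if c | e else 0.0
    best = max(events, key=lambda eid: overlap(events[eid]))
    return best, overlap(events[best])
```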
And so fundamentally, rather than, for example, measuring content based on how many people are watching, this gives us the basic data for looking at the engagement properties of content. And just like we can look at feedback cycles and dynamics in a family, we can now open up the same concepts and look at much larger groups of people. This is a subset of data from our database -- just 50,000 out of several million -- and the social graph that connects them through publicly available sources. And if you put them on one plane, a second plane is where the content lives. So we have the programs and the sporting events and the commercials, and all of the link structures that tie them together make a content graph. And then the important third dimension: each of the links that you're seeing rendered here is an actual connection made between something someone said and a piece of content. And there are, again, now tens of millions of these links that give us the connective tissue of social graphs and how they relate to content. And we can now start to probe the structure in interesting ways.
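A minimal sketch of that three-part structure, with plain dictionaries standing in for whatever graph store the production system actually uses:

```python
# Minimal sketch: a social plane (who follows whom), a content plane
# (shows, ads and their relations), and comment links tying them together.
class MediaGraph:
    def __init__(self):
        self.follows = {}   # person -> set of people they follow
        self.content = {}   # content_id -> set of related content_ids
        self.links = []     # (person, content_id, comment_text)

    def add_comment(self, person, content_id, text):
        self.links.append((person, content_id, text))

    def commenters(self, content_id):
        # Everyone whose comments attach to this piece of content.
        return {p for p, c, _ in self.links if c == content_id}
```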
So if we, for example, trace the path of one piece of content that drives someone to comment on it, and then we follow where that comment goes, and then look at the entire social graph that becomes activated and then trace back to see the relationship between that social graph and content, a very interesting structure becomes visible. We call this a co-viewing clique, a virtual living room if you will. And there are fascinating dynamics at play. It's not one way. A piece of content, an event, causes someone to talk. They talk to other people. That drives tune-in behavior back into mass media, and you have these cycles that drive the overall behavior.
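Here is a sketch of how such a co-viewing clique might be pulled out, assuming a follower map and the set of commenters on one piece of content ("clique" is used loosely here, as in the talk, for the socially connected part of the audience):

```python
# Minimal sketch: keep the commenters on one piece of content who have
# a social tie, in either direction, to at least one other commenter.
def co_viewing_clique(follows, commenters):
    """follows: {person: set of people they follow};
    commenters: set of people who commented on the content."""
    clique = set()
    for p in commenters:
        if any(q in follows.get(p, set()) or p in follows.get(q, set())
               for q in commenters if q != p):
            clique.add(p)
    return clique
```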
Another example -- very different -- another actual person in our database -- and we're finding at least hundreds, if not thousands, of these. We've given this person a name. This is a pro-amateur, or pro-am, media critic who has this high fan-out rate. So a lot of people are following this person -- very influential -- and they have a propensity to talk about what's on TV. So this person is a key link in connecting mass media and social media together.
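As a toy illustration, one could flag such pro-am critics by combining fan-out with a propensity to post about TV; the scoring and threshold below are arbitrary inventions, not the project's method.

```python
# Minimal sketch: rank candidate "pro-am" critics by follower fan-out
# weighted by the fraction of their posts that are about TV content.
def proam_critics(followers, tv_posts, all_posts, min_fanout=10000):
    """followers: {person: follower count}; tv_posts/all_posts: per-person
    post counts. Returns people sorted by a crude influence score."""
    scores = {}
    for p, fanout in followers.items():
        if fanout >= min_fanout and all_posts.get(p, 0) > 0:
            propensity = tv_posts.get(p, 0) / all_posts[p]
            scores[p] = fanout * propensity
    return sorted(scores, key=scores.get, reverse=True)
```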
One last example from this data: Sometimes it's actually a piece of content that is special. So if we go and look at this piece of content, President Obama's State of the Union address from just a few weeks ago, and look at what we find in this same data set, at the same scale, the engagement properties of this piece of content are truly remarkable. A nation exploding in conversation in real time in response to what's on the broadcast. And of course, through all of these lines is flowing unstructured language. We can X-ray and get a real-time pulse of a nation, a real-time sense of the social reactions in the different circuits of the social graph being activated by content.
So, to summarize, the idea is this: As our world becomes increasingly instrumented and we have the capabilities to collect and connect the dots between what people are saying and the context they're saying it in, what's emerging is an ability to see new social structures and dynamics that have previously not been seen. It's like building a microscope or telescope and revealing new structures about our own behavior around communication. And I think the implications here are profound, whether it's for science, for commerce, for government, or perhaps most of all, for us as individuals.
And so just to return to my son, when I was preparing this talk, he was looking over my shoulder, and I showed him the clips I was going to show to you today, and I asked him for permission -- granted. And then I went on to reflect, "Isn't it amazing, this entire database, all these recordings, I'm going to hand off to you and to your sister" -- who arrived two years later -- "and you guys are going to be able to go back and re-experience moments that you could never, with your biological memory, possibly remember the way you can now?" And he was quiet for a moment. And I thought, "What am I thinking? He's five years old. He's not going to understand this." And just as I was having that thought, he looked up at me and said, "So that when I grow up, I can show this to my kids?" And I thought, "Wow, this is powerful stuff."
So I want to leave you with one last memorable moment from our family. This is the first time our son took more than two steps at once -- captured on film. And I really want you to focus on something as I take you through. It's a cluttered environment; it's natural life. My mother's in the kitchen, cooking, and, of all places, in the hallway, I realize he's about to do it, about to take more than two steps. And so you hear me encouraging him, realizing what's happening, and then the magic happens. Listen very carefully. About three steps in, he realizes something magic is happening, and the most amazing feedback loop of all kicks in, and he takes a breath in, and he whispers "wow" and instinctively I echo back the same. And so let's fly back in time to that memorable moment.
(Video) DR: Hey. Come here. Can you do it? Oh, boy. Can you do it? Baby: Yeah. DR: Ma, he's walking.
(Laughter)
(Applause)
DR: Thank you.
(Applause)