Imagine if you could record your life -- everything you said, everything you did, available in a perfect memory store at your fingertips, so you could go back and find memorable moments and relive them, or sift through traces of time and discover patterns in your own life that previously had gone undiscovered. Well that's exactly the journey that my family began five and a half years ago. This is my wife and collaborator, Rupal. And on this day, at this moment, we walked into the house with our first child, our beautiful baby boy. And we walked into a house with a very special home video recording system.
(Video) Man: Okay.
Deb Roy: This moment and thousands of other moments special for us were captured in our home because in every room in the house, if you looked up, you'd see a camera and a microphone, and if you looked down, you'd get this bird's-eye view of the room. Here's our living room, the baby bedroom, kitchen, dining room and the rest of the house. And all of these fed into a disc array that was designed for continuous capture. So here we are flying through a day in our home as we move from sunlit morning through incandescent evening and, finally, lights out for the day. Over the course of three years, we recorded eight to 10 hours a day, amassing roughly a quarter-million hours of multi-track audio and video.
So you're looking at a piece of what is by far the largest home video collection ever made. (Laughter) And what this data represents for our family at a personal level -- the impact has already been immense, and we're still learning its value. Countless unsolicited, natural moments -- not posed moments -- are captured there, and we're starting to learn how to discover and find them.
But there's also a scientific reason that drove this project, which was to use this natural longitudinal data to understand the process of how a child learns language -- that child being my son. And so with many privacy provisions put in place to protect everyone who was recorded in the data, we made elements of the data available to my trusted research team at MIT so we could start teasing apart patterns in this massive data set, trying to understand the influence of social environments on language acquisition. So we're looking here at one of the first things we started to do. This is my wife and I cooking breakfast in the kitchen, and as we move through space and through time, a very everyday pattern of life in the kitchen.
In order to convert this opaque, 90,000 hours of video into something that we could start to see, we use motion analysis to pull out, as we move through space and through time, what we call space-time worms. And this has become part of our toolkit for being able to look and see where the activities are in the data, and with it, trace the pattern of, in particular, where my son moved throughout the home, so that we could focus our transcription efforts, all of the speech environment around my son -- all of the words that he heard from myself, my wife, our nanny, and over time, the words he began to produce. So with that technology and that data and the ability to, with machine assistance, transcribe speech, we've now transcribed well over seven million words of our home transcripts. And with that, let me take you now for a first tour into the data.
So you've all, I'm sure, seen time-lapse videos where a flower will blossom as you accelerate time. I'd like you to now experience the blossoming of a speech form. My son, soon after his first birthday, would say "gaga" to mean water. And over the course of the next half-year, he slowly learned to approximate the proper adult form, "water." So we're going to cruise through half a year in about 40 seconds. No video here, so you can focus on the sound, the acoustics, of a new kind of trajectory: gaga to water.
(Audio) Baby: Gagagagagaga Gaga gaga gaga guga guga guga wada gaga gaga guga gaga wader guga guga water water water water water water water water water.
DR: He sure nailed it, didn't he?
(Applause)
So he didn't just learn water. Over the course of the 24 months, the first two years that we really focused on, this is a map of every word he learned in chronological order. And because we have full transcripts, we've identified each of the 503 words that he learned to produce by his second birthday. He was an early talker. And so we started to analyze why. Why were certain words born before others? This is one of the first results that came out of our study a little over a year ago that really surprised us. The way to interpret this apparently simple graph is: the vertical axis indicates how complex caregiver utterances are, based on the length of the utterances, and the horizontal axis is time.
And all of the data, we aligned based on the following idea: Every time my son would learn a word, we would trace back and look at all of the language he heard that contained that word. And we would plot the relative length of the utterances. And what we found was this curious phenomenon, that caregiver speech would systematically dip to a minimum, making language as simple as possible, and then slowly ascend back up in complexity. And the amazing thing was that bounce, that dip, lined up almost precisely with when each word was born -- word after word, systematically. So it appears that all three primary caregivers -- myself, my wife and our nanny -- were systematically and, I would think, subconsciously restructuring our language to meet him at the birth of a word and bring him gently into more complex language. And the implications of this -- there are many, but one I just want to point out, is that there must be amazing feedback loops. Of course, my son is learning from his linguistic environment, but the environment is learning from him. That environment, people, are in these tight feedback loops and creating a kind of scaffolding that has not been noticed until now.
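The alignment step he describes, tracing back every utterance that contained a word and plotting utterance length relative to the word's birth, can be sketched in a few lines of Python. This is a toy illustration with invented data, using word count as a crude stand-in for utterance complexity; it is not the team's actual pipeline:

```python
from collections import defaultdict

def utterance_length_profile(utterances, word, birth_time, bin_days=7):
    """Mean caregiver utterance length (in words), binned by time
    relative to the word's "birth" (first production by the child).

    utterances: list of (timestamp_in_days, text) heard by the child.
    Returns {bin_offset: mean_length}, where bin 0 contains the birth.
    """
    bins = defaultdict(list)
    for t, text in utterances:
        tokens = text.lower().split()
        if word in tokens:
            offset = int((t - birth_time) // bin_days)
            bins[offset].append(len(tokens))
    return {k: sum(v) / len(v) for k, v in sorted(bins.items())}

# Invented data: caregiver speech simplifies near the birth of
# "water" (day 400), then slowly ramps back up in complexity.
utterances = [
    (380, "would you like some nice cold water with your lunch"),
    (395, "do you want water"),
    (400, "water"),
    (405, "more water"),
    (420, "here is a big cup of water for you to drink"),
]
profile = utterance_length_profile(utterances, "water", birth_time=400)
print(profile)  # {-3: 10.0, -1: 4.0, 0: 1.5, 2: 11.0}
```

On real transcripts, the per-bin means would trace the dip-and-rebound curve he shows: long utterances well before the word's birth, a minimum right at the birth, and a slow climb back toward complexity.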
But that's looking at the speech context. What about the visual context? We're not looking at -- think of this as a dollhouse cutaway of our house. We've taken those circular fish-eye lens cameras, and we've done some optical correction, and then we can bring it into three-dimensional life. So welcome to my home. This is a moment, one moment captured across multiple cameras. The reason we did this is to create the ultimate memory machine, where you can go back and interactively fly around and then breathe video-life into this system. What I'm going to do is give you an accelerated view of 30 minutes, again, of just life in the living room. That's me and my son on the floor. And there's video analytics that are tracking our movements. My son is leaving red ink. I am leaving green ink. We're now on the couch, looking out through the window at cars passing by. And finally, my son playing in a walking toy by himself.
Now we freeze the action, 30 minutes, we turn time into the vertical axis, and we open up for a view of these interaction traces we've just left behind. And we see these amazing structures -- these little knots of two colors of thread we call "social hot spots." The spiral thread we call a "solo hot spot." And we think that these affect the way language is learned. What we'd like to do is start understanding the interaction between these patterns and the language that my son is exposed to to see if we can predict how the structure of when words are heard affects when they're learned -- so in other words, the relationship between words and what they're about in the world.
So here's how we're approaching this. In this video, again, my son is being traced out. He's leaving red ink behind. And there's our nanny by the door.
(Video) Nanny: You want water? (Baby: Aaaa.) Nanny: All right. (Baby: Aaaa.)
DR: She offers water, and off go the two worms over to the kitchen to get water. And what we've done is use the word "water" to tag that moment, that bit of activity. And now we take the power of data and take every time my son ever heard the word water and the context he saw it in, and we use it to penetrate through the video and find every activity trace that co-occurred with an instance of water. And what this data leaves in its wake is a landscape. We call these wordscapes. This is the wordscape for the word water, and you can see most of the action is in the kitchen. That's where those big peaks are over to the left. And just for contrast, we can do this with any word. We can take the word "bye" as in "good bye." And we're now zoomed in over the entrance to the house. And we look, and we find, as you would expect, a contrast in the landscape where the word "bye" occurs much more in a structured way. So we're using these structures to start predicting the order of language acquisition, and that's ongoing work now.
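The wordscape construction, aggregating every activity trace that co-occurs with a word into a spatial histogram whose peaks are the "big peaks" he points to, can be sketched the same way. Again a toy version: the events list and the 4x4 floor grid are invented for illustration:

```python
from collections import Counter

def wordscape(events, word, grid=4):
    """Spatial histogram of where a word's tagged moments occur.

    events: list of (word, x, y), with x and y normalized to [0, 1)
    across the floor plan. Returns {(col, row): count}; the cells
    with the highest counts are the peaks of the landscape.
    """
    counts = Counter()
    for w, x, y in events:
        if w == word:
            counts[(int(x * grid), int(y * grid))] += 1
    return dict(counts)

# Invented events: "water" clusters in one corner (the kitchen),
# "bye" clusters in the opposite corner (the entrance).
events = [
    ("water", 0.1, 0.2), ("water", 0.15, 0.1), ("water", 0.2, 0.15),
    ("water", 0.8, 0.9),
    ("bye", 0.9, 0.85), ("bye", 0.95, 0.9),
]
print(wordscape(events, "water"))  # {(0, 0): 3, (3, 3): 1}
print(wordscape(events, "bye"))    # {(3, 3): 2}
```

The contrast he describes falls out directly: the peak cell for "water" sits in the kitchen corner, while "bye" piles up at the entrance.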
In my lab, which we're peering into now, at MIT -- this is at the media lab. This has become my favorite way of videographing just about any space. Three of the key people in this project, Philip DeCamp, Rony Kubat and Brandon Roy are pictured here. Philip has been a close collaborator on all the visualizations you're seeing. And Michael Fleischman was another Ph.D. student in my lab who worked with me on this home video analysis, and he made the following observation: that "just the way that we're analyzing how language connects to events which provide common ground for language, that same idea we can take out of your home, Deb, and we can apply it to the world of public media." And so our effort took an unexpected turn.
Think of mass media as providing common ground and you have the recipe for taking this idea to a whole new place. We've started analyzing television content using the same principles -- analyzing event structure of a TV signal -- episodes of shows, commercials, all of the components that make up the event structure. And we're now, with satellite dishes, pulling and analyzing a good part of all the TV being watched in the United States. And you don't have to now go and instrument living rooms with microphones to get people's conversations, you just tune into publicly available social media feeds.
So we're pulling in about three billion comments a month, and then the magic happens. You have the event structure, the common ground that the words are about, coming out of the television feeds; you've got the conversations that are about those topics; and through semantic analysis -- and this is actually real data you're looking at from our data processing -- each yellow line is showing a link being made between a comment in the wild and a piece of event structure coming out of the television signal. And the same idea now can be built up. And we get this wordscape, except now words are not assembled in my living room. Instead, the context, the common ground activities, are the content on television that's driving the conversations. And what we're seeing here, these skyscrapers now, are commentary that are linked to content on television. Same concept, but looking at communication dynamics in a very different sphere.
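The talk doesn't detail the semantic analysis behind those yellow lines, but the simplest version of the linking idea, matching each comment to the TV event it shares the most words with, looks like this. Keyword overlap is a deliberately crude stand-in for the real semantic analysis, and the event and comment data are invented:

```python
def link_comments(comments, events, min_overlap=2):
    """Link each comment to its most likely TV event by word overlap.

    comments: list of (comment_id, text); events: list of (event_id,
    text describing the event structure). Returns (comment_id,
    event_id) pairs for links clearing the min_overlap threshold.
    """
    links = []
    for cid, ctext in comments:
        cwords = set(ctext.lower().split())
        best, best_score = None, 0
        for eid, etext in events:
            score = len(cwords & set(etext.lower().split()))
            if score > best_score:
                best, best_score = eid, score
        if best_score >= min_overlap:
            links.append((cid, best))
    return links

events = [
    ("sotu", "president state of the union address jobs economy"),
    ("game", "super bowl touchdown final quarter"),
]
comments = [
    ("c1", "great line about jobs in the state of the union"),
    ("c2", "what a touchdown in the final quarter"),
    ("c3", "just ate lunch"),
]
print(link_comments(comments, events))
# [('c1', 'sotu'), ('c2', 'game')]
```

Each emitted pair is one "yellow line": a comment in the wild tied to a piece of event structure, with off-topic chatter ("c3") left unlinked.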
And so fundamentally, rather than, for example, measuring content based on how many people are watching, this gives us the basic data for looking at engagement properties of content. And just like we can look at feedback cycles and dynamics in a family, we can now open up the same concepts and look at much larger groups of people. This is a subset of data from our database -- just 50,000 out of several million -- and the social graph that connects them through publicly available sources. And if you put them on one plane, a second plane is where the content lives. So we have the programs and the sporting events and the commercials, and all of the link structures that tie them together make a content graph. And then the important third dimension. Each of the links that you're seeing rendered here is an actual connection made between something someone said and a piece of content. And there are, again, now tens of millions of these links that give us the connective tissue of social graphs and how they relate to content. And we can now start to probe the structure in interesting ways.
So if we, for example, trace the path of one piece of content that drives someone to comment on it, and then we follow where that comment goes, and then look at the entire social graph that becomes activated and then trace back to see the relationship between that social graph and content, a very interesting structure becomes visible. We call this a co-viewing clique, a virtual living room if you will. And there are fascinating dynamics at play. It's not one way. A piece of content, an event, causes someone to talk. They talk to other people. That drives tune-in behavior back into mass media, and you have these cycles that drive the overall behavior.
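The co-viewing clique he traces, from a piece of content to its commenters to their followers and back, amounts to a one-hop expansion over the social graph. A minimal sketch, with a hypothetical follower list in place of the real publicly available sources:

```python
def co_viewing_clique(content_id, comment_links, follows):
    """The "virtual living room": everyone a piece of content
    activates, either directly (they commented on it) or one hop
    out in the social graph (they follow someone who commented).

    comment_links: list of (user, content_id) pairs.
    follows: list of (follower, followee) pairs.
    """
    commenters = {user for user, cid in comment_links if cid == content_id}
    clique = set(commenters)
    for follower, followee in follows:
        if followee in commenters:
            clique.add(follower)
    return clique

# Invented graph: alice and bob comment on the address; their
# followers get pulled into its clique, carol's follower does not.
comment_links = [("alice", "sotu"), ("bob", "sotu"), ("carol", "game")]
follows = [("dave", "alice"), ("erin", "bob"), ("frank", "carol")]
print(sorted(co_viewing_clique("sotu", comment_links, follows)))
# ['alice', 'bob', 'dave', 'erin']
```

The cycle he describes would iterate this: the followers' tune-in behavior generates new comments, which expand the clique on the next pass.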
Another example -- very different -- another actual person in our database -- and we're finding at least hundreds, if not thousands, of these. We've given this person a name. This is a pro-amateur, or pro-am media critic who has this high fan-out rate. So a lot of people are following this person -- very influential -- and they have a propensity to talk about what's on TV. So this person is a key link in connecting mass media and social media together.
One last example from this data: Sometimes it's actually a piece of content that is special. So if we go and look at this piece of content, President Obama's State of the Union address from just a few weeks ago, and look at what we find in this same data set, at the same scale, the engagement properties of this piece of content are truly remarkable. A nation exploding in conversation in real time in response to what's on the broadcast. And of course, through all of these lines are flowing unstructured language. We can X-ray and get a real-time pulse of a nation, real-time sense of the social reactions in the different circuits in the social graph being activated by content.
So, to summarize, the idea is this: As our world becomes increasingly instrumented and we have the capabilities to collect and connect the dots between what people are saying and the context they're saying it in, what's emerging is an ability to see new social structures and dynamics that have previously not been seen. It's like building a microscope or telescope and revealing new structures about our own behavior around communication. And I think the implications here are profound, whether it's for science, for commerce, for government, or perhaps most of all, for us as individuals.
And so just to return to my son, when I was preparing this talk, he was looking over my shoulder, and I showed him the clips I was going to show to you today, and I asked him for permission -- granted. And then I went on to reflect, "Isn't it amazing, this entire database, all these recordings, I'm going to hand off to you and to your sister" -- who arrived two years later -- "and you guys are going to be able to go back and re-experience moments that you could never, with your biological memory, possibly remember the way you can now?" And he was quiet for a moment. And I thought, "What am I thinking? He's five years old. He's not going to understand this." And just as I was having that thought, he looked up at me and said, "So that when I grow up, I can show this to my kids?" And I thought, "Wow, this is powerful stuff."
So I want to leave you with one last memorable moment from our family. This is the first time our son took more than two steps at once -- captured on film. And I really want you to focus on something as I take you through. It's a cluttered environment; it's natural life. My mother's in the kitchen, cooking, and, of all places, in the hallway, I realize he's about to do it, about to take more than two steps. And so you hear me encouraging him, realizing what's happening, and then the magic happens. Listen very carefully. About three steps in, he realizes something magic is happening, and the most amazing feedback loop of all kicks in, and he takes a breath in, and he whispers "wow" and instinctively I echo back the same. And so let's fly back in time to that memorable moment.
(Video) DR: Hey. Come here. Can you do it? Oh, boy. Can you do it? Baby: Yeah. DR: Ma, he's walking.
(Laughter)
(Applause)
DR: Thank you.
(Applause)