Greg Gage: Mind-reading. You've seen this in sci-fi movies: machines that can read our thoughts. However, there are devices today that can read the electrical activity from our brains. We call this the EEG, the electroencephalogram. Is there information contained in these brainwaves? And if so, could we train a computer to read our thoughts?
My buddy Nathan has been working to hack the EEG to build a mind-reading machine.
[DIY Neuroscience]
So this is how the EEG works. Inside your head is a brain, and that brain is made out of billions of neurons. Each of those neurons sends an electrical message to each other. These small messages can combine to make an electrical wave that we can detect on a monitor. Now traditionally, the EEG can tell us large-scale things, for example if you're asleep or if you're alert. But can it tell us anything else? Can it actually read our thoughts? We're going to test this, and we're not going to start with some complex thoughts. We're going to do something very simple. Can we interpret what someone is seeing using only their brainwaves?
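The summing Greg describes is worth seeing in numbers. Here is a purely illustrative sketch (the neuron count, amplitudes, and the shared 10 Hz rhythm are all invented) of how thousands of tiny, partly synchronized neural contributions add up into a wave an electrode can actually detect:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical numbers throughout: 5,000 neurons, 1 second at ~1 kHz,
# each neuron contributing a tiny 10 Hz component with phase jitter.
n_neurons, n_samples = 5000, 1000
t = np.linspace(0.0, 1.0, n_samples)

phases = rng.normal(0.0, 0.5, size=n_neurons)
contrib = 1e-3 * np.sin(2 * np.pi * 10 * t[None, :] + phases[:, None])
contrib += rng.normal(0.0, 5e-3, size=(n_neurons, n_samples))

# The scalp electrode can't see individual neurons -- only their sum.
eeg = contrib.sum(axis=0)

# In the summed signal, the shared rhythm dominates the incoherent noise.
spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(n_samples, d=t[1] - t[0])
peak_hz = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
print(round(float(peak_hz), 1))
```

Each individual trace here is buried in its own noise; it is only the coherent part of the activity that survives the sum, which is exactly why the EEG reflects large populations of neurons rather than single cells.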
Nathan's going to begin by placing electrodes on Christy's head.
Nathan: My life is tangled.
(Laughter)
GG: And then he's going to show her a bunch of pictures from four different categories.
Nathan: Face, house, scenery and weird pictures.
GG: As we show Christy hundreds of these images, we are also capturing the electrical waves onto Nathan's computer. We want to see if we can detect any visual information about the photos contained in the brainwaves, so when we're done, we're going to see if the EEG can tell us what kind of picture Christy is looking at, and if it does, each category should trigger a different brain signal.
OK, so we collected all the raw EEG data, and this is what we got. It all looks pretty messy, so let's arrange them by picture. Now, still a bit too noisy to see any differences, but if we average the EEG across all image types by aligning them to when the image first appeared, we can remove this noise, and pretty soon, we can see some dominant patterns emerge for each category.
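The averaging step described above can be sketched in a few lines. This is a toy version with simulated data (the trial count, sampling assumptions, and the size and timing of the evoked bump are all made up), showing why onset-aligned averaging pulls a response out of noise our eyes cannot see in single trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epochs: 200 trials, 300 samples each (~300 ms at 1 kHz),
# aligned so that sample 0 is image onset.
n_trials, n_samples = 200, 300
t = np.arange(n_samples)  # time in ms

# A small P100-like bump (positive deflection near 100 ms) buried in
# noise several times larger than the signal itself.
evoked = 2.0 * np.exp(-((t - 100) ** 2) / (2 * 15.0 ** 2))
trials = evoked + rng.normal(0.0, 5.0, size=(n_trials, n_samples))

# Averaging across onset-aligned trials cancels the zero-mean noise
# (roughly by 1/sqrt(N)), so the evoked response emerges.
erp = trials.mean(axis=0)
peak_latency_ms = int(t[np.argmax(erp)])
print(peak_latency_ms)  # close to the 100 ms bump we built in
```

In a single trial the bump is swamped by noise; after averaging 200 aligned trials the noise shrinks by roughly a factor of fourteen, and the peak latency is recoverable.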
Now the signals all still look pretty similar. Let's take a closer look. About a hundred milliseconds after the image comes on, we see a positive bump in all four cases, and we call this the P100, and what we think that is is what happens in your brain when you recognize an object. But damn, look at that signal for the face. It looks different than the others. There's a negative dip about 170 milliseconds after the image comes on.
What could be going on here? Research shows that our brain has a lot of neurons that are dedicated to recognizing human faces, so this N170 spike could be all those neurons firing at once in the same location, and we can detect that in the EEG.
So there are two takeaways here. One, our eyes can't really detect the differences in patterns without averaging out the noise, and two, even after removing the noise, our eyes can only pick up the signals associated with faces.
So this is where we turn to machine learning. Now, our eyes are not very good at picking up patterns in noisy data, but machine learning algorithms are designed to do just that, so could we take a lot of pictures and a lot of data and feed it in and train a computer to be able to interpret what Christy is looking at in real time?
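A minimal version of this classification idea can be sketched with synthetic data. Everything here is invented for illustration: simulated "face" epochs carry an N170-like dip, "scenery" epochs are pure noise, and the classifier is a simple nearest-class-mean rule rather than whatever Nathan's pipeline actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic epochs for two categories: "face" epochs carry an N170-like
# negative dip at 170 ms; "scenery" epochs are pure noise.
n_per_class, n_samples = 100, 300
t = np.arange(n_samples)
n170 = -3.0 * np.exp(-((t - 170) ** 2) / (2 * 20.0 ** 2))

faces = n170 + rng.normal(0.0, 4.0, size=(n_per_class, n_samples))
scenery = rng.normal(0.0, 4.0, size=(n_per_class, n_samples))

X = np.vstack([faces, scenery])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = face, 1 = scenery

# Hold out a quarter of the epochs for testing.
idx = rng.permutation(len(y))
train, test = idx[:150], idx[150:]

# Nearest-class-mean classifier: learn the average epoch (the ERP) per
# class from training data, then assign each test epoch to the closer ERP.
means = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = float((pred == y[test]).mean())
print(accuracy)
```

The point of the sketch is the two-phase structure: train on labeled epochs, then predict unseen ones. Real EEG is far messier than this simulation, which is exactly the gap the experiment runs into next.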
We're trying to code the information that's coming out of her EEG in real time and predict what it is that her eyes are looking at. And if it works, what we should see is every time that she gets a picture of scenery, it should say scenery, scenery, scenery, scenery. A face -- face, face, face, face, but it's not quite working that way, is what we're discovering.
(Laughter)
OK.
Director: So what's going on here? GG: We need a new career, I think.
(Laughter)
OK, so that was a massive failure. But we're still curious: How far could we push this technology? And we looked back at what we did. We noticed that the data was coming into our computer very quickly, without any timing of when the images came on, and that's the equivalent of reading a very long sentence without spaces between the words. It would be hard to read, but once we add the spaces, individual words appear and it becomes a lot more understandable.
But what if we cheat a little bit? By using a sensor, we can tell the computer when the image first appears. That way, the brainwave stops being a continuous stream of information, and instead becomes individual packets of meaning. Also, we're going to cheat a little bit more, by limiting the categories to two. Let's see if we can do some real-time mind-reading.
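The "adding spaces" trick, in code terms, is epoching: using the sensor's onset timestamps to cut the continuous stream into fixed-length packets. A minimal sketch, with all numbers (sampling rate, onset times, response shape) invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical continuous recording: 20 s of noise at 1 kHz, with a known
# 300 ms response added at each image onset reported by the sensor.
fs = 1000
recording = rng.normal(0.0, 1.0, size=20 * fs)
onsets = np.array([2, 5, 9, 13, 17]) * fs  # sensor-reported onset samples

template = np.hanning(300)  # stand-in for the evoked response
for s in onsets:
    recording[s:s + 300] += template

# Without the onsets, the stream is one long "sentence without spaces".
# With them, we cut it into fixed-length epochs: one packet per image,
# ready to be averaged or fed to a classifier.
epoch_len = 300
epochs = np.stack([recording[s:s + epoch_len] for s in onsets])
print(epochs.shape)  # (5, 300)
```

Each row of `epochs` is one "word": a window of brain activity locked to a single image, which is what makes the real-time decoding in the next experiment feasible.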
In this new experiment, we're going to constrict it a little bit more so that we know the onset of the image and we're going to limit the categories to "face" or "scenery."
Nathan: Face. Correct. Scenery. Correct.
GG: So right now, every time the image comes on, we're taking a picture of the onset of the image and decoding the EEG. It's getting it correct.
Nathan: Yes. Face. Correct.
GG: So there is information in the EEG signal, which is cool. We just had to align it to the onset of the image.
Nathan: Scenery. Correct. Face. Yeah.
GG: This means there is some information there, so if we know at what time the picture came on, we can tell what type of picture it was, possibly, at least on average, by looking at these evoked potentials.
Nathan: Exactly.
GG: If you had told me at the beginning of this project this was possible, I would have said no way. I literally did not think we could do this.
Did our mind-reading experiment really work? Yes, but we had to do a lot of cheating. It turns out you can find some interesting things in the EEG, for example if you're looking at someone's face, but it does have a lot of limitations. Perhaps advances in machine learning will make huge strides, and one day we will be able to decode what's going on in our thoughts. But for now, the next time a company says that they can harness your brainwaves to be able to control devices, it is your right, it is your duty to be skeptical.