Ever since computers were invented, we've been trying to make them smarter and more powerful: from the abacus, to room-sized machines, to desktops, to computers in our pockets. And we are now designing artificial intelligence to automate tasks that would require human intelligence. If you look at the history of computing, we've always treated computers as external devices that compute and act on our behalf. What I want to do is weave computing, AI and the internet into us, as part of human cognition, freeing us to interact with the world around us: to integrate human and machine intelligence right inside our own bodies, to augment us instead of diminishing or replacing us.
Could we combine what people do best, such as creative and intuitive thinking, with what computers do best, such as processing information and perfectly memorizing stuff? Could this whole be better than the sum of its parts?
We have a device that could make that possible. It's called AlterEgo, and it's a wearable device that gives you the experience of a conversational AI that lives inside your head, one that you can talk to much as you talk to yourself internally. We have a new prototype that we're showing here, for the first time at TED, and here's how it works.
Normally, when we speak, the brain sends neurosignals through the nerves to your internal speech systems, activating them and your vocal cords to produce speech. It's one of the most complex cognitive and motor tasks that we do as human beings. Now, imagine talking to yourself without vocalizing, without moving your mouth, without moving your jaw, but by simply articulating those words internally, thereby very subtly engaging your internal speech systems, such as your tongue and the back of your palate. When that happens, the brain sends extremely weak signals to these internal speech systems.
AlterEgo has sensors embedded in a thin, flexible, transparent plastic device that sits on your neck just like a sticker. These sensors pick up these internal signals, sourced deep within the mouth cavity, right from the surface of the skin. An AI program running in the background then tries to figure out what the user is trying to say. It then feeds back an answer to the user by means of bone conduction: audio conducted through the skull into the user's inner ear, which the user hears overlaid on top of their natural hearing of the environment, without blocking it.
The combination of all these parts, the input, the output and the AI, gives a net subjective experience of an interface inside your head that you can talk to much as you talk to yourself. Just to be very clear, the device does not record or read your thoughts. It records deliberate information that you want to communicate, through deliberate engagement of your internal speech systems. People don't want to be read; they want to write. That is why we designed the system to deliberately record from the peripheral nervous system, and why the control in all situations resides with the user.
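The loop described above, sensing weak internal-speech signals, decoding them to text, answering the query, and playing the reply back over bone conduction, can be sketched as a toy pipeline. Everything in this sketch is an illustrative assumption: `SensorFrame`, the lookup-table decoder, and `answer_query` are placeholders, not the actual AlterEgo implementation, which runs trained models on real neuromuscular data.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One hypothetical reading from the neck-worn sticker's sensors."""
    channel_readings: tuple  # raw EMG-like samples (illustrative)

def decode_internal_speech(frames):
    """Stand-in for the AI decoder: map signal frames to words.

    A real system would run a trained sequence model over the signals;
    here we fake it with a lookup table keyed on a crude feature
    (the mean amplitude of each frame), purely to show the data flow.
    """
    vocab = {0: "what's", 1: "the", 2: "weather", 3: "in", 4: "vancouver"}
    words = []
    for frame in frames:
        bucket = int(sum(frame.channel_readings) / len(frame.channel_readings))
        words.append(vocab.get(bucket, "<unk>"))
    return " ".join(words)

def answer_query(query):
    """Stand-in for the background assistant / internet lookup."""
    if "weather" in query and "vancouver" in query:
        return "It's 50 degrees and rainy here in Vancouver."
    return "Sorry, I didn't catch that."

def play_bone_conduction(text):
    """Stand-in for audio delivered through the skull to the inner ear."""
    return f"[bone-conduction audio] {text}"

def alterego_loop(frames):
    """Full loop: sense -> decode -> answer -> play back."""
    query = decode_internal_speech(frames)
    return play_bone_conduction(answer_query(query))
```

Note that, matching the control point made above, nothing happens unless the user deliberately produces frames: the loop only ever consumes signals the wearer chose to generate.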
I want to stop here for a second and show you a live demo. What I'm going to do is ask Eric a question. And he's going to search for that information without vocalizing, without typing, without moving his fingers, without moving his mouth, simply by internally asking that question. The AI will then figure out the answer and feed it back to Eric, through audio, through the device. While you see a laptop in front of him, he's not using it. Everything lives on the device. All he needs is that sticker device to interface with the AI and the internet. So, Eric, what's the weather in Vancouver like, right now? What you see on the screen are the words that Eric is speaking to himself right now. This is happening in real time.
Eric: It's 50 degrees and rainy here in Vancouver.
Arnav Kapur: What happened is that the AI sent the answer through audio, through the device, back to Eric.
What could the implications of something like this be? Imagine perfectly memorizing things, where you perfectly record information that you silently speak, and then hear them later when you want to, internally searching for information, crunching numbers at speeds computers do, silently texting other people. Suddenly becoming multilingual, so that you internally speak in one language, and hear the translation in your head in another. The potential could be far-reaching.
There are millions of people around the world who struggle with using natural speech. People with conditions such as ALS, or Lou Gehrig's disease, stroke and oral cancer, amongst many other conditions. For them, communicating is a painstakingly slow and tiring process.
This is Doug. Doug was diagnosed with ALS about 12 years ago and has since lost the ability to speak. Today, he uses an on-screen keyboard, where he types in individual letters using his head movements, and it takes several minutes to communicate a single sentence. So we went to Doug and asked him what the first words he'd like to say using our system would be. Perhaps a greeting, like, "Hello, how are you?" Or an indication that he needed help with something. What Doug said he wanted to use our system for was to reboot his old system, because that old system kept on crashing.
(Laughter)
We never could have predicted that. I'm going to show you a short clip of Doug using our system for the first time.
(Voice) Reboot computer.
AK: What you just saw there was Doug communicating or speaking in real time for the first time since he lost the ability to speak. There are millions of people who might be able to communicate in real time like Doug, with other people, with their friends and with their families. My hope is to be able to help them express their thoughts and ideas.
I believe computing, AI and the internet will disappear into us as extensions of our cognition, instead of being external entities or adversaries, amplifying human ingenuity, giving us unimaginable abilities and unlocking our true potential. And perhaps even freeing us to become better at being human.
Thank you so much.
(Applause)
Shoham Arad: Come over here. OK. I want to ask you a couple of questions, they're going to clear the stage. I feel like this is amazing, it's innovative, it's creepy, it's terrifying. Can you tell us what I think ... I think there are some uncomfortable feelings around this. Tell us, is this reading your thoughts, will it in five years, is there a weaponized version of this, what does it look like?
AK: So our first design principle, before we started working on this, was to not render ethics as an afterthought. So we wanted to bake ethics right into the design. We flipped the design. Instead of reading from the brain directly, we're reading from the voluntary nervous system that you deliberately have to engage to communicate with the device, while still bringing the benefits of a thinking or a thought device. The best of both worlds in a way.
SA: OK, I think people are going to have a lot more questions for you. Also, you said that it's a sticker. So right now it sits just right here? Is that the final iteration? Is that what you hope the final design will look like?
AK: Our goal is for the technology to disappear completely.
SA: What does that mean?
AK: If you're wearing it, I shouldn't be able to see it. You don't want technology on your face, you want it in the background, to augment you in the background. So we have a sticker version that conforms to the skin, that looks like the skin, but we're trying to make an even smaller version that would sit right here.
SA: OK. I feel like if anyone has any questions they want to ask Arnav, he'll be here all week. OK, thank you so much, Arnav.
AK: Thanks, Shoham.