We are built out of very small stuff, and we are embedded in a very large cosmos, and the fact is that we are not very good at understanding reality at either of those scales, and that's because our brains haven't evolved to understand the world at that scale.
Instead, we're trapped on this very thin slice of perception right in the middle. But it gets strange, because even at that slice of reality that we call home, we're not seeing most of the action that's going on. So take the colors of our world. These are light waves, electromagnetic radiation that bounces off objects and hits specialized receptors in the back of our eyes. But we're not seeing all the waves out there. In fact, what we see is less than a ten-trillionth of what's out there. So you have radio waves and microwaves and X-rays and gamma rays passing through your body right now, and you're completely unaware of it, because you don't come with the proper biological receptors for picking them up. There are thousands of cell phone conversations passing through you right now, and you're utterly blind to them.
Now, it's not that these things are inherently unseeable. Snakes include some infrared in their reality, and honeybees include ultraviolet in their view of the world, and of course we build machines in the dashboards of our cars to pick up on signals in the radio frequency range, and we build machines in hospitals to pick up on signals in the X-ray range. But you can't sense any of those by yourself, at least not yet, because you don't come equipped with the proper sensors.
Now, what this means is that our experience of reality is constrained by our biology, and that goes against the common sense notion that our eyes and our ears and our fingertips are just picking up the objective reality that's out there. Instead, our brains are sampling just a little bit of the world.
Now, across the animal kingdom, different animals pick up on different parts of reality. So in the blind and deaf world of the tick, the important signals are temperature and butyric acid; in the world of the black ghost knifefish, its sensory world is lavishly colored by electrical fields; and for the echolocating bat, its reality is constructed out of air compression waves. That's the slice of their ecosystem that they can pick up on, and we have a word for this in science. It's called the umwelt, which is the German word for the surrounding world. Now, presumably, every animal assumes that its umwelt is the entire objective reality out there, because why would you ever stop to imagine that there's something beyond what we can sense. Instead, what we all do is we accept reality as it's presented to us.
Let's do a consciousness-raiser on this. Imagine that you are a bloodhound dog. Your whole world is about smelling. You've got a long snout that has 200 million scent receptors in it, and you have wet nostrils that attract and trap scent molecules, and your nostrils even have slits so you can take big nosefuls of air. Everything is about smell for you. So one day, you stop in your tracks with a revelation. You look at your human owner and you think, "What is it like to have the pitiful, impoverished nose of a human? (Laughter) What is it like when you take a feeble little noseful of air? How can you not know that there's a cat 100 yards away, or that your neighbor was on this very spot six hours ago?" (Laughter)
So because we're humans, we've never experienced that world of smell, so we don't miss it, because we are firmly settled into our umwelt. But the question is, do we have to be stuck there? So as a neuroscientist, I'm interested in the way that technology might expand our umwelt, and how that's going to change the experience of being human.
So we already know that we can marry our technology to our biology, because there are hundreds of thousands of people walking around with artificial hearing and artificial vision. So the way this works is, you take a microphone and you digitize the signal, and you put an electrode strip directly into the inner ear. Or, with the retinal implant, you take a camera and you digitize the signal, and then you plug an electrode grid directly into the optic nerve. And as recently as 15 years ago, there were a lot of scientists who thought these technologies wouldn't work. Why? It's because these technologies speak the language of Silicon Valley, and it's not exactly the same dialect as our natural biological sense organs. But the fact is that it works; the brain figures out how to use the signals just fine.
Now, how do we understand that? Well, here's the big secret: Your brain is not hearing or seeing any of this. Your brain is locked in a vault of silence and darkness inside your skull. All it ever sees are electrochemical signals that come in along different data cables, and this is all it has to work with, and nothing more. Now, amazingly, the brain is really good at taking in these signals and extracting patterns and assigning meaning, so that it takes this inner cosmos and puts together a story of this, your subjective world.
But here's the key point: Your brain doesn't know, and it doesn't care, where it gets the data from. Whatever information comes in, it just figures out what to do with it. And this is a very efficient kind of machine. It's essentially a general purpose computing device, and it just takes in everything and figures out what it's going to do with it, and that, I think, frees up Mother Nature to tinker around with different sorts of input channels.
So I call this the P.H. model of evolution, and I don't want to get too technical here, but P.H. stands for Potato Head, and I use this name to emphasize that all these sensors that we know and love, like our eyes and our ears and our fingertips, these are merely peripheral plug-and-play devices: You stick them in, and you're good to go. The brain figures out what to do with the data that comes in. And when you look across the animal kingdom, you find lots of peripheral devices. So snakes have heat pits with which to detect infrared, and the ghost knifefish has electroreceptors, and the star-nosed mole has this appendage with 22 fingers on it with which it feels around and constructs a 3D model of the world, and many birds have magnetite so they can orient to the magnetic field of the planet. So what this means is that nature doesn't have to continually redesign the brain. Instead, with the principles of brain operation established, all nature has to worry about is designing new peripherals.
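The Potato Head idea can be sketched in code, assuming nothing about real neural coding: every peripheral, whatever physical quantity it transduces, exposes the same interface, and one generic processor handles them all without knowing where the data came from. All class names and signal values here are hypothetical, chosen purely for illustration.

```python
from typing import List, Protocol


class Peripheral(Protocol):
    """Any sensor that can be 'plugged in': it only has to emit a signal vector."""
    def read(self) -> List[float]: ...


class HeatPit:
    """Snake-style infrared detector (toy values)."""
    def read(self) -> List[float]:
        return [0.25, 0.75, 0.5]


class Electroreceptor:
    """Knifefish-style electric-field sensor (toy values)."""
    def read(self) -> List[float]:
        return [0.25, 0.75]


class Brain:
    """The 'general purpose computing device': it never asks what the data is."""
    def process(self, device: Peripheral) -> float:
        signal = device.read()
        # One generic computation applied to every input channel.
        return sum(signal) / len(signal)


brain = Brain()
print(brain.process(HeatPit()))          # same code path for every peripheral
print(brain.process(Electroreceptor()))
```

The point of the sketch is that `Brain.process` is written once; nature only has to design new classes implementing `read()`.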
Okay. So what this means is this: The lesson that surfaces is that there's nothing really special or fundamental about the biology that we come to the table with. It's just what we have inherited from a complex road of evolution. But it's not what we have to stick with, and our best proof of principle of this comes from what's called sensory substitution. And that refers to feeding information into the brain via unusual sensory channels, and the brain just figures out what to do with it.
Now, that might sound speculative, but the first paper demonstrating this was published in the journal Nature in 1969. So a scientist named Paul Bach-y-Rita put blind people in a modified dental chair, and he set up a video feed, and he put something in front of the camera, and then you would feel that poked into your back with a grid of solenoids. So if you wiggle a coffee cup in front of the camera, you're feeling that in your back, and amazingly, blind people got pretty good at being able to determine what was in front of the camera just by feeling it in the small of their back. Now, there have been many modern incarnations of this. The sonic glasses take a video feed right in front of you and turn that into a sonic landscape, so as things move around, and get closer and farther, it sounds like "Bzz, bzz, bzz." It sounds like a cacophony, but after several weeks, blind people start getting pretty good at understanding what's in front of them just based on what they're hearing. And it doesn't have to be through the ears: this system uses an electrotactile grid on the forehead, so whatever's in front of the video feed, you're feeling it on your forehead. Why the forehead? Because you're not using it for much else.
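The core transform in Bach-y-Rita's chair — camera image in, coarse tactile pattern out — is essentially aggressive downsampling. A minimal sketch of that idea, where the grid size and the average-pooling step are my own assumptions rather than the 1969 hardware:

```python
from typing import List


def to_tactile_grid(frame: List[List[int]], rows: int, cols: int) -> List[List[int]]:
    """Average-pool a grayscale frame (values 0-255) down to a rows x cols
    grid of solenoid drive levels, one value per tactile 'pixel'."""
    h, w = len(frame), len(frame[0])
    rh, cw = h // rows, w // cols  # size of each pooling cell
    grid = []
    for r in range(rows):
        line = []
        for c in range(cols):
            cell = [frame[y][x]
                    for y in range(r * rh, (r + 1) * rh)
                    for x in range(c * cw, (c + 1) * cw)]
            line.append(sum(cell) // len(cell))
        grid.append(line)
    return grid


# A 4x4 "frame" with a bright object in the top-left corner:
# the wearer would feel a poke in the corresponding corner of their back.
frame = [[255, 255, 0, 0],
         [255, 255, 0, 0],
         [0,   0,   0, 0],
         [0,   0,   0, 0]]
print(to_tactile_grid(frame, 2, 2))  # [[255, 0], [0, 0]]
```

Wiggling the coffee cup in front of the camera just moves the bright region, and with it the active solenoids.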
The most modern incarnation is called the brainport, and this is a little electrogrid that sits on your tongue, and the video feed gets turned into these little electrotactile signals, and blind people get so good at using this that they can throw a ball into a basket, or they can navigate complex obstacle courses. They can come to see through their tongue. Now, that sounds completely insane, right? But remember, all vision ever is is electrochemical signals coursing around in your brain. Your brain doesn't know where the signals come from. It just figures out what to do with them.
So my interest in my lab is sensory substitution for the deaf, and this is a project I've undertaken with a graduate student in my lab, Scott Novich, who is spearheading this for his thesis. And here is what we wanted to do: we wanted to make it so that sound from the world gets converted in some way so that a deaf person can understand what is being said. And we wanted to do this, given the power and ubiquity of portable computing, we wanted to make sure that this would run on cell phones and tablets, and also we wanted to make this a wearable, something that you could wear under your clothing. So here's the concept. So as I'm speaking, my sound is getting captured by the tablet, and then it's getting mapped onto a vest that's covered in vibratory motors, just like the motors in your cell phone. So as I'm speaking, the sound is getting translated to a pattern of vibration on the vest. Now, this is not just conceptual: this tablet is transmitting Bluetooth, and I'm wearing the vest right now. So as I'm speaking -- (Applause) -- the sound is getting translated into dynamic patterns of vibration. I'm feeling the sonic world around me.
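One plausible reading of the pipeline just described is: audio frame → frequency bands → per-motor drive levels. The band count, the naive DFT, and the 0-255 scaling below are illustrative assumptions, not the vest's actual firmware.

```python
import cmath
import math
from typing import List


def band_energies(samples: List[float], n_bands: int) -> List[float]:
    """Naive DFT of one audio frame, with magnitudes pooled into n_bands
    frequency bands (only the first half of the spectrum is informative)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    per_band = len(mags) // n_bands
    return [sum(mags[b * per_band:(b + 1) * per_band])
            for b in range(n_bands)]


def to_motor_levels(energies: List[float], max_level: int = 255) -> List[int]:
    """Scale band energies into vibration-motor drive levels (0..max_level)."""
    peak = max(energies) or 1.0
    return [round(max_level * e / peak) for e in energies]


# A frame dominated by one low-frequency tone: the low-band motor
# should buzz hardest while the others stay quiet.
frame = [math.cos(2 * math.pi * 2 * t / 64) for t in range(64)]
levels = to_motor_levels(band_energies(frame, 4))
print(levels)
```

Run continuously over overlapping frames, this turns speech into exactly the kind of moving vibration pattern the talk describes.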
So, we've been testing this with deaf people now, and it turns out that after just a little bit of time, people can start feeling, they can start understanding the language of the vest.
So this is Jonathan. He's 37 years old. He has a master's degree. He was born profoundly deaf, which means that there's a part of his umwelt that's unavailable to him. So we had Jonathan train with the vest for four days, two hours a day, and here he is on the fifth day.
Scott Novich: You.
David Eagleman: So Scott says a word, Jonathan feels it on the vest, and he writes it on the board.
SN: Where. Where.
DE: Jonathan is able to translate this complicated pattern of vibrations into an understanding of what's being said.
SN: Touch. Touch.
DE: Now, he's not doing this -- (Applause) -- Jonathan is not doing this consciously, because the patterns are too complicated, but his brain is starting to unlock the pattern that allows it to figure out what the data mean, and our expectation is that, after wearing this for about three months, he will have a direct perceptual experience of hearing in the same way that when a blind person passes a finger over braille, the meaning comes directly off the page without any conscious intervention at all. Now, this technology has the potential to be a game-changer, because the only other solution for deafness is a cochlear implant, and that requires an invasive surgery. And this can be built for 40 times cheaper than a cochlear implant, which opens up this technology globally, even for the poorest countries.
Now, we've been very encouraged by our results with sensory substitution, but what we've been thinking a lot about is sensory addition. How could we use a technology like this to add a completely new kind of sense, to expand the human umwelt? For example, could we feed real-time data from the Internet directly into somebody's brain, and can they develop a direct perceptual experience?
So here's an experiment we're doing in the lab. A subject is feeling a real-time streaming data feed from the Net for five seconds. Then, two buttons appear, and he has to make a choice. He doesn't know what's going on. He makes a choice, and he gets feedback after one second. Now, here's the thing: The subject has no idea what all the patterns mean, but we're seeing if he gets better at figuring out which button to press. He doesn't know that what we're feeding in is real-time data from the stock market, and he's making buy and sell decisions. (Laughter) And the feedback is telling him whether he did the right thing or not. And what we're seeing is, can we expand the human umwelt so that he comes to have, after several weeks, a direct perceptual experience of the economic movements of the planet. So we'll report on that later to see how well this goes. (Laughter)
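The trial structure described — a few seconds of patterned data, a two-way choice, feedback one second later — is a standard closed-loop conditioning design. Here is a toy simulation of it, with a hypothetical learner that only tracks which button paid off for a rising versus falling feed; nothing here is the lab's actual protocol or analysis.

```python
import random

random.seed(0)


def run_trials(n_trials: int) -> float:
    """Simulate the button task: the stimulus is whether a (fake) price feed
    went up or down; 'buy' is rewarded on up, 'sell' on down. The subject
    starts out guessing and learns from feedback alone, never being told
    what the data actually is."""
    # Learned value of each (stimulus, action) pair, initially neutral.
    value = {("up", "buy"): 0.0, ("up", "sell"): 0.0,
             ("down", "buy"): 0.0, ("down", "sell"): 0.0}
    correct = 0
    for _ in range(n_trials):
        stimulus = random.choice(["up", "down"])
        # Pick the action with the higher learned value (ties break randomly).
        buy_v, sell_v = value[(stimulus, "buy")], value[(stimulus, "sell")]
        if buy_v > sell_v:
            action = "buy"
        elif sell_v > buy_v:
            action = "sell"
        else:
            action = random.choice(["buy", "sell"])
        reward = 1.0 if (stimulus, action) in {("up", "buy"),
                                               ("down", "sell")} else -1.0
        # Simple incremental update toward the received reward.
        value[(stimulus, action)] += 0.2 * (reward - value[(stimulus, action)])
        correct += reward > 0
    return correct / n_trials


print(run_trials(500))  # well above the 0.5 chance level once feedback kicks in
```

The point of the design is the same as in the talk: accuracy climbing above chance, without the subject ever knowing what the patterns encode.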
Here's another thing we're doing: During the talks this morning, we've been automatically scraping Twitter for the TED2015 hashtag, and we've been doing an automated sentiment analysis, which means, are people using positive words or negative words or neutral? And while this has been going on, I have been feeling this, and so I am plugged in to the aggregate emotion of thousands of people in real time, and that's a new kind of human experience, because now I can know how everyone's doing and how much you're loving this. (Laughter) (Applause) It's a bigger experience than a human can normally have.
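The aggregate-emotion feed can be approximated with the simplest possible sentiment analysis: count positive versus negative words per tweet and pool the scores. The tiny lexicon and example tweets below are made up for illustration; a real pipeline would use a trained classifier.

```python
POSITIVE = {"love", "amazing", "great", "wow"}
NEGATIVE = {"boring", "bad", "awful", "confusing"}


def tweet_score(text: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


def aggregate_sentiment(tweets: list) -> float:
    """Mean score across the stream: >0 leans positive, <0 negative.
    This single number is what would drive the vest's vibration pattern."""
    return sum(tweet_score(t) for t in tweets) / len(tweets)


stream = ["love this talk amazing demo",
          "great vest wow",
          "a bit confusing honestly"]
print(aggregate_sentiment(stream))  # positive overall for this toy stream
```

Mapping that one aggregate value onto the vest is then the same normalization step used for any other data feed.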
We're also expanding the umwelt of pilots. So in this case, the vest is streaming nine different measures from this quadcopter, so pitch and yaw and roll and orientation and heading, and that improves this pilot's ability to fly it. It's essentially like he's extending his skin up there, far away.
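Streaming "nine different measures" to the vest amounts to normalizing each telemetry channel into a fixed vibration range so that every measure feels comparable on the skin. The channel names and plausible ranges below are assumptions for illustration, not the actual drone or vest interface.

```python
from typing import Dict, Tuple

# Assumed plausible range for each telemetry channel (units vary by channel).
CHANNEL_RANGES: Dict[str, Tuple[float, float]] = {
    "pitch": (-90.0, 90.0), "yaw": (-180.0, 180.0), "roll": (-90.0, 90.0),
    "heading": (0.0, 360.0), "altitude": (0.0, 120.0),
}


def telemetry_to_motors(reading: Dict[str, float]) -> Dict[str, int]:
    """Clamp each channel into its range, then rescale to a 0-255 motor
    drive level, one dedicated motor (or motor group) per measure."""
    levels = {}
    for name, value in reading.items():
        lo, hi = CHANNEL_RANGES[name]
        clamped = min(max(value, lo), hi)
        levels[name] = round(255 * (clamped - lo) / (hi - lo))
    return levels


print(telemetry_to_motors({"pitch": 0.0, "yaw": 90.0, "heading": 360.0}))
```

With a fixed channel-to-motor assignment, a drifting yaw or a climbing pitch becomes a distinct, localized change in the vibration pattern — the "extended skin" of the talk.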
And that's just the beginning. What we're envisioning is taking a modern cockpit full of gauges and instead of trying to read the whole thing, you feel it. We live in a world of information now, and there is a difference between accessing big data and experiencing it.
So I think there's really no end to the possibilities on the horizon for human expansion. Just imagine an astronaut being able to feel the overall health of the International Space Station, or, for that matter, having you feel the invisible states of your own health, like your blood sugar and the state of your microbiome, or having 360-degree vision or seeing in infrared or ultraviolet.
So the key is this: As we move into the future, we're going to increasingly be able to choose our own peripheral devices. We no longer have to wait for Mother Nature's sensory gifts on her timescales, but instead, like any good parent, she's given us the tools that we need to go out and define our own trajectory. So the question now is, how do you want to go out and experience your universe?
Thank you.
(Applause)
Chris Anderson: Can you feel it? DE: Yeah.
Actually, this was the first time I felt applause on the vest. It's nice. It's like a massage. (Laughter)
CA: Twitter's going crazy. Twitter's going mad. So that stock market experiment. This could be the first experiment that secures its funding forevermore, right, if successful?
DE: Well, that's right, I wouldn't have to write to NIH anymore.
CA: Well look, just to be skeptical for a minute, I mean, this is amazing, but isn't most of the evidence so far that sensory substitution works, not necessarily that sensory addition works? I mean, isn't it possible that the blind person can see through their tongue because the visual cortex is still there, ready to process, and that that is needed as part of it?
DE: That's a great question. We actually have no idea what the theoretical limits are of what kind of data the brain can take in. The general story, though, is that it's extraordinarily flexible. So when a person goes blind, what we used to call their visual cortex gets taken over by other things, by touch, by hearing, by vocabulary. So what that tells us is that the cortex is kind of a one-trick pony. It just runs certain kinds of computations on things. And when we look around at things like braille, for example, people are getting information through bumps on their fingers. So I don't think we have any reason to think there's a theoretical limit that we know the edge of.
CA: If this checks out, you're going to be deluged. There are so many possible applications for this. Are you ready for this? What are you most excited about, the direction it might go? DE: I mean, I think there's a lot of applications here. In terms of beyond sensory substitution, the things I started mentioning about astronauts on the space station, they spend a lot of their time monitoring things, and they could instead just get what's going on, because what this is really good for is multidimensional data. The key is this: Our visual systems are good at detecting blobs and edges, but they're really bad at what our world has become, which is screens with lots and lots of data. We have to crawl that with our attentional systems. So this is a way of just feeling the state of something, just like the way you know the state of your body as you're standing around. So I think heavy machinery, safety, feeling the state of a factory, of your equipment, that's one place it'll go right away.
CA: David Eagleman, that was one mind-blowing talk. Thank you very much.
DE: Thank you, Chris. (Applause)