We are built out of very small stuff, and we are embedded in a very large cosmos, and the fact is that we are not very good at understanding reality at either of those scales, and that's because our brains haven't evolved to understand the world at that scale.
Instead, we're trapped on this very thin slice of perception right in the middle. But it gets strange, because even at that slice of reality that we call home, we're not seeing most of the action that's going on. So take the colors of our world. This is light waves, electromagnetic radiation that bounces off objects and it hits specialized receptors in the back of our eyes. But we're not seeing all the waves out there. In fact, what we see is less than a 10 trillionth of what's out there. So you have radio waves and microwaves and X-rays and gamma rays passing through your body right now and you're completely unaware of it, because you don't come with the proper biological receptors for picking it up. There are thousands of cell phone conversations passing through you right now, and you're utterly blind to it.
Now, it's not that these things are inherently unseeable. Snakes include some infrared in their reality, and honeybees include ultraviolet in their view of the world, and of course we build machines in the dashboards of our cars to pick up on signals in the radio frequency range, and we built machines in hospitals to pick up on the X-ray range. But you can't sense any of those by yourself, at least not yet, because you don't come equipped with the proper sensors.
Now, what this means is that our experience of reality is constrained by our biology, and that goes against the common sense notion that our eyes and our ears and our fingertips are just picking up the objective reality that's out there. Instead, our brains are sampling just a little bit of the world.
Now, across the animal kingdom, different animals pick up on different parts of reality. So in the blind and deaf world of the tick, the important signals are temperature and butyric acid; in the world of the black ghost knifefish, its sensory world is lavishly colored by electrical fields; and for the echolocating bat, its reality is constructed out of air compression waves. That's the slice of their ecosystem that they can pick up on, and we have a word for this in science. It's called the umwelt, which is the German word for the surrounding world. Now, presumably, every animal assumes that its umwelt is the entire objective reality out there, because why would you ever stop to imagine that there's something beyond what we can sense? Instead, what we all do is we accept reality as it's presented to us.
Let's do a consciousness-raiser on this. Imagine that you are a bloodhound dog. Your whole world is about smelling. You've got a long snout that has 200 million scent receptors in it, and you have wet nostrils that attract and trap scent molecules, and your nostrils even have slits so you can take big nosefuls of air. Everything is about smell for you. So one day, you stop in your tracks with a revelation. You look at your human owner and you think, "What is it like to have the pitiful, impoverished nose of a human? (Laughter) What is it like when you take a feeble little noseful of air? How can you not know that there's a cat 100 yards away, or that your neighbor was on this very spot six hours ago?" (Laughter)
So because we're humans, we've never experienced that world of smell, so we don't miss it, because we are firmly settled into our umwelt. But the question is, do we have to be stuck there? So as a neuroscientist, I'm interested in the way that technology might expand our umwelt, and how that's going to change the experience of being human.
So we already know that we can marry our technology to our biology, because there are hundreds of thousands of people walking around with artificial hearing and artificial vision. So the way this works is, you take a microphone and you digitize the signal, and you put an electrode strip directly into the inner ear. Or, with the retinal implant, you take a camera and you digitize the signal, and then you plug an electrode grid directly into the optic nerve. And as recently as 15 years ago, there were a lot of scientists who thought these technologies wouldn't work. Why? It's because these technologies speak the language of Silicon Valley, and it's not exactly the same dialect as our natural biological sense organs. But the fact is that it works; the brain figures out how to use the signals just fine.
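To make the "digitize the signal, then stimulate" idea concrete, here is a minimal sketch of that kind of processing chain, assuming NumPy/SciPy; the band edges, channel count, and the stimulate() stub are illustrative assumptions, not the actual implant's design.

```python
# Minimal sketch of the "digitize, then drive electrodes" idea behind a
# cochlear-implant-style processor. Band edges, channel count and the
# stimulate() stub are illustrative assumptions, not a real device API.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000                      # sample rate of the digitized microphone signal
BANDS = [(200, 500), (500, 1000), (1000, 2000), (2000, 4000)]  # Hz, illustrative

def band_energies(audio: np.ndarray) -> list[float]:
    """Split the signal into frequency bands and return one energy per band."""
    energies = []
    for low, high in BANDS:
        sos = butter(4, [low, high], btype="bandpass", fs=FS, output="sos")
        band = sosfilt(sos, audio)
        energies.append(float(np.sqrt(np.mean(band ** 2))))  # RMS envelope
    return energies

def stimulate(channel: int, level: float) -> None:
    """Stand-in for driving one electrode on the implanted strip."""
    print(f"electrode {channel}: level {level:.3f}")

# One frame of audio in, one stimulation level per electrode channel out.
frame = np.random.randn(FS // 50)          # 20 ms of fake microphone samples
for ch, level in enumerate(band_energies(frame)):
    stimulate(ch, level)
```

The point is only that ordinary digital signal processing sits between the microphone and the electrode strip; as the talk says, the brain figures out what to do with the signals.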
Now, how do we understand that? Well, here's the big secret: Your brain is not hearing or seeing any of this. Your brain is locked in a vault of silence and darkness inside your skull. All it ever sees are electrochemical signals that come in along different data cables, and this is all it has to work with, and nothing more. Now, amazingly, the brain is really good at taking in these signals and extracting patterns and assigning meaning, so that it takes this inner cosmos and puts together a story of this, your subjective world.
But here's the key point: Your brain doesn't know, and it doesn't care, where it gets the data from. Whatever information comes in, it just figures out what to do with it. And this is a very efficient kind of machine. It's essentially a general purpose computing device, and it just takes in everything and figures out what it's going to do with it, and that, I think, frees up Mother Nature to tinker around with different sorts of input channels.
So I call this the P.H. model of evolution, and I don't want to get too technical here, but P.H. stands for Potato Head, and I use this name to emphasize that all these sensors that we know and love, like our eyes and our ears and our fingertips, these are merely peripheral plug-and-play devices: You stick them in, and you're good to go. The brain figures out what to do with the data that comes in. And when you look across the animal kingdom, you find lots of peripheral devices. So snakes have heat pits with which to detect infrared, and the ghost knifefish has electroreceptors, and the star-nosed mole has this appendage with 22 fingers on it with which it feels around and constructs a 3D model of the world, and many birds have magnetite so they can orient to the magnetic field of the planet. So what this means is that nature doesn't have to continually redesign the brain. Instead, with the principles of brain operation established, all nature has to worry about is designing new peripherals.
Okay. So what this means is this: The lesson that surfaces is that there's nothing really special or fundamental about the biology that we come to the table with. It's just what we have inherited from a complex road of evolution. But it's not what we have to stick with, and our best proof of principle of this comes from what's called sensory substitution. And that refers to feeding information into the brain via unusual sensory channels, and the brain just figures out what to do with it.
Now, that might sound speculative, but the first paper demonstrating this was published in the journal Nature in 1969. So a scientist named Paul Bach-y-Rita put blind people in a modified dental chair, and he set up a video feed, and he put something in front of the camera, and then you would feel that poked into your back with a grid of solenoids. So if you wiggle a coffee cup in front of the camera, you're feeling that in your back, and amazingly, blind people got pretty good at being able to determine what was in front of the camera just by feeling it in the small of their back. Now, there have been many modern incarnations of this. The sonic glasses take a video feed right in front of you and turn that into a sonic landscape, so as things move around, and get closer and farther, it sounds like "Bzz, bzz, bzz." It sounds like a cacophony, but after several weeks, blind people start getting pretty good at understanding what's in front of them just based on what they're hearing. And it doesn't have to be through the ears: this system uses an electrotactile grid on the forehead, so whatever's in front of the video feed, you're feeling it on your forehead. Why the forehead? Because you're not using it for much else.
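As a rough illustration of how a camera image can be turned into a pattern on a tactile grid, here is a minimal sketch assuming NumPy; the 20x20 grid size and the drive_actuator() stub are hypothetical, chosen only to show the downsampling-and-mapping step.

```python
# A rough sketch of the sensory-substitution mapping described above:
# downsample a camera frame to a coarse grid and let each cell's brightness
# drive one tactile actuator (solenoid on the back, electrode on the forehead).
# Grid size and the drive_actuator() stub are assumptions for illustration.
import numpy as np

GRID = (20, 20)  # tactile resolution is far coarser than the camera's

def frame_to_tactile(frame: np.ndarray) -> np.ndarray:
    """Reduce a grayscale camera frame (H x W, 0-255) to a GRID of drive levels in [0, 1]."""
    h, w = frame.shape
    gh, gw = GRID
    levels = frame[: h - h % gh, : w - w % gw]          # crop so the frame tiles evenly
    levels = levels.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return levels / 255.0

def drive_actuator(row: int, col: int, level: float) -> None:
    """Stand-in for energizing one solenoid / electrode in the grid."""
    pass

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # fake camera frame
for (r, c), level in np.ndenumerate(frame_to_tactile(frame)):
    drive_actuator(r, c, level)
```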
The most modern incarnation is called the brainport, and this is a little electrogrid that sits on your tongue, and the video feed gets turned into these little electrotactile signals, and blind people get so good at using this that they can throw a ball into a basket, or they can navigate complex obstacle courses. They can come to see through their tongue. Now, that sounds completely insane, right? But remember, all vision ever is is electrochemical signals coursing around in your brain. Your brain doesn't know where the signals come from. It just figures out what to do with them.
So my interest in my lab is sensory substitution for the deaf, and this is a project I've undertaken with a graduate student in my lab, Scott Novich, who is spearheading this for his thesis. And here is what we wanted to do: we wanted to make it so that sound from the world gets converted in some way so that a deaf person can understand what is being said. And we wanted to do this, given the power and ubiquity of portable computing, we wanted to make sure that this would run on cell phones and tablets, and also we wanted to make this a wearable, something that you could wear under your clothing. So here's the concept. So as I'm speaking, my sound is getting captured by the tablet, and then it's getting mapped onto a vest that's covered in vibratory motors, just like the motors in your cell phone. So as I'm speaking, the sound is getting translated to a pattern of vibration on the vest. Now, this is not just conceptual: this tablet is transmitting Bluetooth, and I'm wearing the vest right now. So as I'm speaking -- (Applause) -- the sound is getting translated into dynamic patterns of vibration. I'm feeling the sonic world around me.
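Here is a hedged sketch of the kind of audio-to-vibration mapping such a vest could use, again assuming NumPy; the 32-motor layout and the send_over_bluetooth() stub are assumptions for illustration, not the actual device's firmware.

```python
# A hedged sketch of the vest's audio-to-vibration mapping: take a short chunk
# of microphone audio, compute its magnitude spectrum, pool it into one value
# per motor, and ship those intensities to the garment. The 32-motor count and
# the send_over_bluetooth() stub are assumptions, not the actual hardware API.
import numpy as np

FS = 16_000        # microphone sample rate
N_MOTORS = 32      # number of vibratory motors on the vest (illustrative)

def audio_chunk_to_motor_levels(chunk: np.ndarray) -> np.ndarray:
    """Map one ~20 ms audio chunk to N_MOTORS vibration intensities (0-255)."""
    spectrum = np.abs(np.fft.rfft(chunk * np.hanning(len(chunk))))
    bands = np.array_split(spectrum, N_MOTORS)          # one frequency band per motor
    levels = np.array([band.mean() for band in bands])
    levels = levels / (levels.max() + 1e-9)             # normalize the frame
    return (levels * 255).astype(np.uint8)

def send_over_bluetooth(levels: np.ndarray) -> None:
    """Stand-in for the tablet-to-vest link; one byte per motor."""
    pass

chunk = np.random.randn(FS // 50)                        # fake 20 ms of speech
send_over_bluetooth(audio_chunk_to_motor_levels(chunk))
```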
So, we've been testing this with deaf people now, and it turns out that after just a little bit of time, people can start feeling, they can start understanding the language of the vest.
So this is Jonathan. He's 37 years old. He has a master's degree. He was born profoundly deaf, which means that there's a part of his umwelt that's unavailable to him. So we had Jonathan train with the vest for four days, two hours a day, and here he is on the fifth day.
Scott Novich: You.
David Eagleman: So Scott says a word, Jonathan feels it on the vest, and he writes it on the board.
SN: Where. Where.
DE: Jonathan is able to translate this complicated pattern of vibrations into an understanding of what's being said.
SN: Touch. Touch.
DE: Now, he's not doing this -- (Applause) -- Jonathan is not doing this consciously, because the patterns are too complicated, but his brain is starting to unlock the pattern that allows it to figure out what the data mean, and our expectation is that, after wearing this for about three months, he will have a direct perceptual experience of hearing in the same way that when a blind person passes a finger over braille, the meaning comes directly off the page without any conscious intervention at all. Now, this technology has the potential to be a game-changer, because the only other solution for deafness is a cochlear implant, and that requires an invasive surgery. And this can be built for 40 times cheaper than a cochlear implant, which opens up this technology globally, even for the poorest countries.
Now, we've been very encouraged by our results with sensory substitution, but what we've been thinking a lot about is sensory addition. How could we use a technology like this to add a completely new kind of sense, to expand the human umwelt? For example, could we feed real-time data from the Internet directly into somebody's brain, and could they develop a direct perceptual experience?
So here's an experiment we're doing in the lab. A subject is feeling a real-time streaming feed of data from the Net for five seconds. Then, two buttons appear, and he has to make a choice. He doesn't know what's going on. He makes a choice, and he gets feedback after one second. Now, here's the thing: The subject has no idea what all the patterns mean, but we're seeing if he gets better at figuring out which button to press. He doesn't know that what we're feeding is real-time data from the stock market, and he's making buy and sell decisions. (Laughter) And the feedback is telling him whether he did the right thing or not. And what we're seeing is, can we expand the human umwelt so that he comes to have, after several weeks, a direct perceptual experience of the economic movements of the planet? So we'll report on that later to see how well this goes. (Laughter)
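The trial structure just described can be sketched as a simple loop; everything named here (get_market_tick, vest_display, read_button_press) is a hypothetical stand-in for the real feed, garment, and response buttons.

```python
# A sketch of the trial structure described above: 5 seconds of streamed data
# on the vest, a two-button forced choice, then feedback one second later.
# get_market_tick(), vest_display() and read_button_press() are hypothetical
# stand-ins for the real feed, garment and response box.
import random
import time

def get_market_tick() -> float:
    return random.gauss(0.0, 1.0)          # placeholder for the real-time market feed

def vest_display(value: float) -> None:
    pass                                    # placeholder: pattern the vest's motors

def read_button_press() -> str:
    return random.choice(["buy", "sell"])   # placeholder for the subject's choice

def run_trial() -> bool:
    start = time.time()
    last_tick = 0.0
    while time.time() - start < 5.0:        # 5 s of feeling the streamed data
        last_tick = get_market_tick()
        vest_display(last_tick)
        time.sleep(0.05)
    choice = read_button_press()            # two buttons appear; subject picks one
    correct = (choice == "buy") == (last_tick > 0)   # "right" if the market moved their way
    time.sleep(1.0)                         # feedback arrives one second later
    print("correct" if correct else "wrong")
    return correct

run_trial()
```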
Here's another thing we're doing: During the talks this morning, we've been automatically scraping Twitter for the TED2015 hashtag, and we've been doing an automated sentiment analysis, which means, are people using positive words or negative words or neutral? And while this has been going on, I have been feeling this, and so I am plugged in to the aggregate emotion of thousands of people in real time, and that's a new kind of human experience, because now I can know how everyone's doing and how much you're loving this. (Laughter) (Applause) It's a bigger experience than a human can normally have.
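A minimal sketch of that kind of word-list sentiment scoring might look like this; the fetch_recent_tweets() stub and the tiny lexicons are placeholders for the real scrape and word lists, and the resulting score would then be mapped onto the vest like any other signal.

```python
# A minimal sketch of the word-list sentiment scoring described above. The
# fetch_recent_tweets() stub and the tiny word lists are assumptions standing
# in for the real Twitter scrape and lexicon.
POSITIVE = {"love", "amazing", "great", "wow", "beautiful"}
NEGATIVE = {"boring", "bad", "hate", "awful", "confusing"}

def fetch_recent_tweets(hashtag: str) -> list[str]:
    """Placeholder for the live scrape of tweets tagged with the talk's hashtag."""
    return ["loving this talk, amazing demo", "a bit confusing but great"]

def aggregate_sentiment(tweets: list[str]) -> float:
    """Return a score in [-1, 1]: fraction of positive words minus fraction of negative words."""
    pos = neg = total = 0
    for tweet in tweets:
        for word in tweet.lower().split():
            total += 1
            pos += word in POSITIVE
            neg += word in NEGATIVE
    return (pos - neg) / total if total else 0.0

print(aggregate_sentiment(fetch_recent_tweets("#TED2015")))
```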
We're also expanding the umwelt of pilots. So in this case, the vest is streaming nine different measures from this quadcopter, so pitch and yaw and roll and orientation and heading, and that improves this pilot's ability to fly it. It's essentially like he's extending his skin up there, far away.
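One plausible way to lay such telemetry out on the skin is to give each measure its own patch of motors; this sketch is an assumption about the mapping, since the talk only says that nine measures are streamed.

```python
# A sketch of how multidimensional telemetry might be laid out on the skin:
# each measure gets its own patch of motors, and its value sets that patch's
# vibration intensity. The channel list, ranges and set_motor_patch() stub are
# illustrative, not the actual system.
RANGES = {            # plausible value ranges per channel, for normalization
    "pitch": (-90, 90), "roll": (-90, 90), "yaw": (-180, 180),
    "heading": (0, 360), "altitude": (0, 120),
}

def set_motor_patch(name: str, level: float) -> None:
    """Stand-in for driving the group of motors assigned to one channel."""
    pass

def telemetry_to_vest(sample: dict[str, float]) -> None:
    for name, value in sample.items():
        lo, hi = RANGES[name]
        level = min(max((value - lo) / (hi - lo), 0.0), 1.0)   # normalize to [0, 1]
        set_motor_patch(name, level)

telemetry_to_vest({"pitch": 5.0, "roll": -12.0, "yaw": 30.0, "heading": 270.0, "altitude": 40.0})
```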
And that's just the beginning. What we're envisioning is taking a modern cockpit full of gauges and instead of trying to read the whole thing, you feel it. We live in a world of information now, and there is a difference between accessing big data and experiencing it.
So I think there's really no end to the possibilities on the horizon for human expansion. Just imagine an astronaut being able to feel the overall health of the International Space Station, or, for that matter, having you feel the invisible states of your own health, like your blood sugar and the state of your microbiome, or having 360-degree vision or seeing in infrared or ultraviolet.
So the key is this: As we move into the future, we're going to increasingly be able to choose our own peripheral devices. We no longer have to wait for Mother Nature's sensory gifts on her timescales, but instead, like any good parent, she's given us the tools that we need to go out and define our own trajectory. So the question now is, how do you want to go out and experience your universe?
Thank you.
(Applause)
Chris Anderson: Can you feel it? DE: Yeah.
Actually, this was the first time I felt applause on the vest. It's nice. It's like a massage. (Laughter)
CA: Twitter's going crazy. Twitter's going mad. So that stock market experiment. This could be the first experiment that secures its funding forevermore, right, if successful?
DE: Well, that's right, I wouldn't have to write to NIH anymore.
CA: Well look, just to be skeptical for a minute, I mean, this is amazing, but isn't most of the evidence so far that sensory substitution works, not necessarily that sensory addition works? I mean, isn't it possible that the blind person can see through their tongue because the visual cortex is still there, ready to process, and that that is needed as part of it?
DE: That's a great question. We actually have no idea what the theoretical limits are of what kind of data the brain can take in. The general story, though, is that it's extraordinarily flexible. So when a person goes blind, what we used to call their visual cortex gets taken over by other things, by touch, by hearing, by vocabulary. So what that tells us is that the cortex is kind of a one-trick pony. It just runs certain kinds of computations on things. And when we look around at things like braille, for example, people are getting information through bumps on their fingers. So I don't think we have any reason to think there's a theoretical limit that we know the edge of.
CA: If this checks out, you're going to be deluged. There are so many possible applications for this. Are you ready for this? What are you most excited about, the direction it might go? DE: I mean, I think there's a lot of applications here. In terms of beyond sensory substitution, the things I started mentioning about astronauts on the space station, they spend a lot of their time monitoring things, and they could instead just get what's going on, because what this is really good for is multidimensional data. The key is this: Our visual systems are good at detecting blobs and edges, but they're really bad at what our world has become, which is screens with lots and lots of data. We have to crawl that with our attentional systems. So this is a way of just feeling the state of something, just like the way you know the state of your body as you're standing around. So I think heavy machinery, safety, feeling the state of a factory, of your equipment, that's one place it'll go right away.
CA: David Eagleman, that was one mind-blowing talk. Thank you very much.
DE: Thank you, Chris. (Applause)