Today's computers are so amazing that we fail to notice how terrible they really are. I'd like to talk to you today about this problem, and how we can fix it with neuroscience.
First, I'd like to take you back to a frosty night in Harlem in 2011 that had a profound impact on me. I was sitting in a dive bar outside of Columbia University, where I studied computer science and neuroscience, and I was having this great conversation with a fellow student about the power of holograms to one day replace computers. And just as we were getting to the best part of the conversation, of course, his phone lights up. And he pulls it towards himself, and he looks down and he starts typing. And then he forces his eyeballs back up to mine and he goes, "Keep going. I'm with you." But of course his eyes were glazed over, and the moment was dead.
Meanwhile across the bar, I noticed another student holding his phone, this time towards a group. He was swiping through pictures on Instagram, and these kids were laughing hysterically. And that dichotomy between how crappy I was feeling and how happy they were feeling about the same technology really got me thinking. And the more I thought of it, the more I realized it was clearly not the digital information that was the bad guy here; it was simply the position of the display that was separating me from my friend and binding those kids together.
See, they were connected around something, just like our ancestors who evolved their social cognition telling stories around the campfire. And that's exactly what tools should do, I think. They should extend our bodies. And I think computers today are doing quite the opposite. Whether you're sending an email to your wife or you're composing a symphony or just consoling a friend, you're doing it in pretty much the same way. You're hunched over these rectangles, fumbling with buttons and menus and more rectangles. And I think this is the wrong way; I think we can start using a much more natural machine. We should use machines that bring our work back into the world. We should use machines that use the principles of neuroscience to extend our senses instead of going against them.
Now it just so happens that I have such a machine here. It's called the Meta 2. Let's try it out. Now in front of me right now, I can see the audience, and I can see my own hands. And in three, two, one, we're going to see an immersive, very realistic hologram appear in front of me: a hologram of the very glasses I'm wearing on my head right now. And of course this could be anything that we're shopping for or learning from, and I can use my hands to very nicely kind of move it around with fine control. And I think Iron Man would be proud. We're going to come back to this in just a bit.
(Applause)
Now if you're anything like me, your mind is already reeling with the possibilities of what we can do with this kind of technology, so let's look at a few.
My mom is an architect, so naturally the first thing I imagined was laying out a building in 3D space instead of having to use these 2D floor plans. She's actually touching graphics right now and selecting interior decor. This was all shot with a GoPro through our actual glasses.
And this next use case is very personal to me: Professor Adam Gazzaley's Glass Brain project, courtesy of UCSF. As a neuroscience student, I would always fantasize about the ability to learn and memorize these complex brain structures with an actual machine where I could touch and play with them.
Now what you're seeing is called augmented reality, but to me, it's part of a much more important story -- a story of how we can begin to extend our bodies with digital devices, instead of the other way around.
Now ... in the next few years, humanity's going to go through a shift, I think. We're going to start putting an entire layer of digital information on the real world. Just imagine for a moment what this could mean for storytellers, for painters, for brain surgeons, for interior decorators and maybe for all of us here today. And what I think we need to do as a community is really try and make an effort to imagine how we can create this new reality in a way that extends the human experience, instead of gamifying our reality or cluttering it with digital information. And that's what I'm very passionate about.
Now, I want to tell you a little secret. In about five years -- this is not the smallest device -- in about five years, these are all going to look like strips of glass on our eyes that project holograms. And just like we don't care so much about which phone we buy in terms of the hardware -- we buy it for the operating system -- as a neuroscientist, I always dreamt of building the iOS of the mind, if you will. And it's very, very important that we get this right, because we might be living inside of these things for at least as long as we've lived with the Windows graphical user interface. And I don't know about you, but living inside of Windows scares me.
(Laughter)
To isolate the single most intuitive interface out of infinity, we use neuroscience to drive our design guidelines, instead of letting a bunch of designers fight it out in the boardroom. And the principle we all revolve around is what's called the "Neural Path of Least Resistance."
At every turn, we're connecting the iOS of the mind with our brains, for the first time, on the brain's own terms. In other words, we're trying to create a zero learning-curve computer. We're building a system that you've always known how to use.
Here are the first three design guidelines that we employ in this brand-new form of user experience. First and foremost, you are the operating system. Traditional file systems are complex and abstract, and they take your brain extra steps to decode them; they go against the Neural Path of Least Resistance. Meanwhile, in augmented reality, you can of course place your holographic TED panel over here, and your holographic email on the other side of the desk, and your spatial memory evolved just fine to go ahead and retrieve them. You could put your holographic Tesla that you're shopping for -- or whatever model my legal team told me to put in right before the show.
(Laughter)
Perfect. And your brain knows exactly how to get it back.
The second interface guideline we call "touch to see." What do babies do when they see something that grabs their interest? They try and reach out and touch it. And that's exactly how the natural machine should work as well. Turns out the visual system gets a fundamental boost from a sense we call proprioception -- that's the sense of our body parts in space. So by touching our work directly, we're not only going to control it better, we're also going to understand it much more deeply. Hence, touch to see.
But it's not enough to experience things ourselves. We're inherently these social primates. And this leads me to our third guideline, the holographic campfire from our first story.
Our mirror-neuron subsystem suggests that we can connect with each other and with our work much better if we can see each other's faces and hands in 3D. So if you look at the video behind me, you can see two Meta users playing around with the same hologram, making eye contact, connected around this thing, instead of being distracted by external devices.
Let's go ahead and try this again with neuroscience in mind. So again, our favorite interface, the iOS of the mind. I'm going to now take a step further and go ahead and grab this pair of glasses and leave it right here by the desk. I'm now with you, I'm in the moment, we're connecting. My spatial memory kicks in, and I can go ahead and grab it and bring it right back here, reminding me that I am the operating system. And now my proprioception is working, and I can go ahead and explode these glasses into a thousand parts and touch the very sensor that is currently scanning my hand.
But it's not enough to see things alone, so in a second, my co-founder Ray is going to make a 3D call -- Ray?
(Ringing)
Hey Ray, how's it going? Guys, I can see this guy in front of me in full 3D. And he is photo-realistic.
(Applause)
Thank you.
My mirror-neuron subsystem suggests that this is going to replace phones before too long. Ray, how's it going?
Ray: Great. We're live today.
(Applause)
MG: Ray, give the crowd a gift of the holographic brain we saw in the video earlier. Guys, this is not only going to change phones, it's also going to change the way we collaborate.
Thank you so much.
Thanks, Ray.
Ray: You're welcome.
(Applause)
MG: So folks, this is the message that I discovered in that bar in 2011: The future of computers is not locked inside one of these screens. It's right here, inside of us.
(Applause)
So if there's one idea that I could leave you with here today, it's that the natural machine is not some figment of the future, it's right here in 2016. Which is why all one hundred of us at Meta, including the administrative staff, the executives, the designers, the engineers -- before TED2017, we're all going to be throwing away our external monitors and replacing them with a truly and profoundly more natural machine.
Thank you very much.
(Applause)
Thank you, appreciate it. Thanks, guys.
Chris Anderson: So help me out on one thing, because there've been a few augmented reality demos shown over the last year or so out there. And there's sometimes a debate among technologists about, are we really seeing the real thing on-screen? There's this issue of field of view, that somehow the technology is showing a broader view than you would actually see wearing the glasses. Were we seeing the real deal there?
MG: Absolutely the real deal. Not only that, we took extra measures to shoot the various videos you've seen here with a GoPro through the actual lens. We wanted to simulate for the world the experience we actually see through the glasses, and not cut any corners.
CA: Thank you so much for showing us that.
MG: Thanks so much, I appreciate that.