I'm really excited to be here today. I'll show you some stuff that's just ready to come out of the lab, literally, and I'm really glad that you guys are going to be among the first to see it in person, because I really think this is going to change the way we interact with machines from this point on.
Now, this is a rear-projected drafting table. It's about 36 inches wide and it's equipped with a multi-touch sensor. Normal touch sensors that you see, like on a kiosk or on interactive whiteboards, can only register one point of contact at a time. This thing allows you to have multiple points at the same time. I can use both my hands; I can use chording actions; I can just go right up and use all 10 fingers if I want to. You know, like that.
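A minimal sketch of what the software side of that might look like, assuming the sensor simply reports a list of contact points every frame; the application then matches each frame's contacts to the previous frame's so every finger keeps a stable identity while it stays on the surface. The class names and the matching threshold below are illustrative assumptions, not part of Han's actual system.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """One finger on the surface: position in sensor coordinates plus pressure."""
    x: float
    y: float
    pressure: float = 1.0

class TouchTracker:
    """Matches this frame's contacts to last frame's by nearest neighbor,
    so each finger keeps a stable id while it stays on the surface."""

    def __init__(self, max_jump: float = 40.0):
        self.max_jump = max_jump          # largest per-frame movement accepted as "same finger"
        self.tracked: dict[int, Contact] = {}
        self._next_id = 0

    def update(self, contacts: list[Contact]) -> dict[int, Contact]:
        new_tracked: dict[int, Contact] = {}
        unused = list(contacts)
        # Greedy nearest-neighbor match against the previous frame.
        for tid, prev in self.tracked.items():
            if not unused:
                break
            best = min(unused, key=lambda c: (c.x - prev.x) ** 2 + (c.y - prev.y) ** 2)
            if (best.x - prev.x) ** 2 + (best.y - prev.y) ** 2 <= self.max_jump ** 2:
                new_tracked[tid] = best
                unused.remove(best)
        # Anything left over is a finger that just touched down.
        for c in unused:
            new_tracked[self._next_id] = c
            self._next_id += 1
        self.tracked = new_tracked
        return new_tracked
```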
Now, multi-touch sensing isn't completely new. People like Bill Buxton have been playing around with it since the '80s. However, the approach I built here is high-resolution, low-cost, and probably most importantly, very scalable. So the technology, you know, isn't the most exciting thing here right now, other than probably its newfound accessibility. What's really interesting is what you can do with it and the kind of interfaces you can build on top of it. So let's see.
So, for instance, we have a lava lamp application here. Now, you can see, I can use both of my hands to kind of squeeze and put the blobs together. I can inject heat into the system here, or I can pull it apart with two of my fingers. It's completely intuitive; there's no instruction manual. The interface just kind of disappears. This started out as a screensaver app that one of the Ph.D. students in our lab, Ilya Rosenberg, made. But I think its true identity comes out here.
Now, what's great about a multi-touch sensor is that I could be doing this with as many fingers as I like, but of course multi-touch also inherently means multi-user. Chris could be interacting with another part of Lava while I play around with it here. You can imagine a new kind of sculpting tool, where I'm kind of warming something up, making it malleable, and then letting it cool down and solidify in a certain state. Google should have something like this in their lobby.
(Laughter)
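One plausible way to model a demo like that, purely as an illustration: treat the surface as a coarse heat field, let each touch inject heat, diffuse and cool the field every frame, and draw the blobs wherever it crosses a threshold. The grid size and constants below are arbitrary assumptions, not the lab's implementation.

```python
import numpy as np

class LavaField:
    """Toy heat field: each touch injects heat, heat diffuses each frame,
    and the 'blobs' are wherever the field exceeds a threshold."""

    def __init__(self, w: int = 128, h: int = 96, diffusion: float = 0.2, cooling: float = 0.995):
        self.temp = np.zeros((h, w))
        self.diffusion = diffusion
        self.cooling = cooling

    def inject(self, x: int, y: int, amount: float = 5.0, radius: int = 4) -> None:
        """A finger at (x, y) pours heat into a small disc around it."""
        h, w = self.temp.shape
        ys, xs = np.ogrid[:h, :w]
        mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
        self.temp[mask] += amount

    def step(self) -> None:
        """Simple 4-neighbor diffusion, then gradual cooling."""
        t = self.temp
        lap = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
               np.roll(t, 1, 1) + np.roll(t, -1, 1) - 4 * t)
        self.temp = (t + self.diffusion * lap) * self.cooling

    def blobs(self, threshold: float = 1.0) -> np.ndarray:
        """Boolean mask of where the 'lava' currently is."""
        return self.temp > threshold
```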
I'll show you a little more of a concrete example here, as this thing loads. This is a photographer's light-box application. Again, I can use both of my hands to interact and move photos around. But what's even cooler is that if I have two fingers, I can actually grab a photo and then stretch it out like that really easily. I can pan, zoom and rotate it effortlessly. I can do that grossly with both of my hands, or I can do it just with two fingers on each of my hands together. If I grab the canvas, I can do the same thing -- stretch it out. I can do it simultaneously, holding this down, and gripping on another one, stretching this out.
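The two-finger grab maps naturally onto a similarity transform: from the two fingers' starting and current positions you can solve for the one uniform scale, rotation, and translation that keeps the photo pinned under the fingers. Here's a small sketch of that calculation; the function name and conventions are just for illustration.

```python
import math

def two_finger_transform(p0a, p0b, p1a, p1b):
    """Given two fingers' start positions (p0a, p0b) and current positions
    (p1a, p1b), return the uniform scale, rotation (radians), and translation
    that carry the original pair onto the new one."""
    # Vector between the two fingers, before and after.
    v0 = (p0b[0] - p0a[0], p0b[1] - p0a[1])
    v1 = (p1b[0] - p1a[0], p1b[1] - p1a[1])

    len0 = math.hypot(*v0)
    len1 = math.hypot(*v1)
    scale = len1 / len0 if len0 else 1.0

    # Angle between the two vectors gives the rotation.
    angle = math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0])

    # Use the midpoints of the finger pair as the anchor for translation.
    mid0 = ((p0a[0] + p0b[0]) / 2, (p0a[1] + p0b[1]) / 2)
    mid1 = ((p1a[0] + p1b[0]) / 2, (p1a[1] + p1b[1]) / 2)
    translation = (mid1[0] - mid0[0], mid1[1] - mid0[1])

    return scale, angle, translation
```

Applying the returned scale and rotation about the original midpoint and then the translation reproduces the pan, zoom, and rotate behavior in a single step.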
Again, the interface just disappears here. There's no manual. This is exactly what you expect, especially if you haven't interacted with a computer before. Now, when you have initiatives like the $100 laptop, I kind of cringe at the idea of introducing a whole new generation to computing with this standard mouse-and-windows-pointer interface. This is something that I think is really the way we should be interacting with machines from now on.
(Applause)
Now, of course, I can bring up a keyboard.
(Laughter) And I can bring that around, put that up there. Obviously, this is a standard keyboard, but of course I can rescale it to make it work well for my hands. That's really important, because there's no reason in this day and age that we should be conforming to a physical device. That leads to bad things, like RSI. We have so much technology nowadays that these interfaces should start conforming to us. So little effort goes now into actually improving the way we interact with interfaces from this point on. This keyboard is probably actually the wrong direction to go. You can imagine, in the future, as we develop this kind of technology, a keyboard that kind of automatically drifts as your hand moves away, and really intelligently anticipates which key you're trying to stroke. So -- again, isn't this great?
(Laughter)
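Rescaling an on-screen keyboard is essentially scaling every key rectangle about an anchor point, say the midpoint of a two-finger stretch. A tiny sketch of that, with a made-up key layout type; none of these names come from the demo itself.

```python
from dataclasses import dataclass

@dataclass
class Key:
    label: str
    x: float      # top-left corner, in table coordinates
    y: float
    w: float
    h: float

def rescale_keyboard(keys: list[Key], anchor: tuple[float, float], factor: float) -> list[Key]:
    """Scale the whole on-screen keyboard about an anchor point, e.g. the
    midpoint of a two-finger stretch, so the keys grow or shrink to fit the hand."""
    ax, ay = anchor
    return [Key(k.label,
                ax + (k.x - ax) * factor,
                ay + (k.y - ay) * factor,
                k.w * factor,
                k.h * factor)
            for k in keys]
```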
Audience: Where's your lab?
Jeff Han: I'm a research scientist at NYU in New York.
Here's an example of another kind of app. I can make these little fuzz balls. It'll remember the strokes I'm making. Of course I can do it with both of my hands. It's pressure-sensitive. What's neat about that is, I showed that two-finger gesture that zooms in really quickly. Because you don't have to switch to a hand tool or the magnifying-glass tool, you can just continuously make things at multiple scales, all at the same time. I can create big things out here, but I can really quickly go back to where I started and make even smaller things here.
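The reason you never have to switch tools is that the strokes can live in world coordinates while the zoom gesture only changes a view transform. A minimal sketch of that idea, assuming a simple pan-plus-zoom view; the class and method names are illustrative.

```python
class InfiniteCanvas:
    """Strokes are stored in world coordinates; the two-finger gesture only
    changes the view (pan + zoom), so drawing works at any scale."""

    def __init__(self):
        self.strokes: list[list[tuple[float, float, float]]] = []  # (x, y, pressure)
        self.pan = (0.0, 0.0)   # world coordinate at the screen origin
        self.zoom = 1.0         # screen pixels per world unit

    def screen_to_world(self, sx: float, sy: float) -> tuple[float, float]:
        return (self.pan[0] + sx / self.zoom, self.pan[1] + sy / self.zoom)

    def begin_stroke(self) -> None:
        self.strokes.append([])

    def add_point(self, sx: float, sy: float, pressure: float) -> None:
        """Record a pressure-sensitive stroke point in world coordinates."""
        wx, wy = self.screen_to_world(sx, sy)
        self.strokes[-1].append((wx, wy, pressure))

    def zoom_about(self, sx: float, sy: float, factor: float) -> None:
        """Zoom so the world point under the fingers stays under the fingers."""
        wx, wy = self.screen_to_world(sx, sy)
        self.zoom *= factor
        self.pan = (wx - sx / self.zoom, wy - sy / self.zoom)
```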
This is going to be really important as we start getting to things like data visualization. For instance, I think we all enjoyed Hans Rosling's talk, and he really emphasized something I've been thinking about for a long time: we have all this great data, but for some reason, it's just sitting there. We're not accessing it. Part of the answer, I think, will come from things like graphics and visualization and inference tools, but I also think a big part of it is going to be having better interfaces, to be able to drill down into this kind of data while still thinking about the big picture.
Let me show you another app here. This is called WorldWind. It's done by NASA. We've all seen Google Earth; this is an open-source version of that. There are plug-ins to be able to load in different data sets that NASA's collected over the years. As you can see, I can use the same two-fingered gestures to go down and zoom in really seamlessly. There's no interface, again. It really allows anybody to kind of go in -- and it just does what you'd expect, you know? The interface just disappears. I can switch to different data views. That's what's neat about this app here. NASA's really cool. These hyper-spectral images are false-colored so you can -- it's really good for determining vegetative use. Well, let's go back to this.
The great thing about mapping applications -- it's not really 2D, it's 3D. So, again, with a multi-point interface, you can do a gesture like this and tilt around like that --
(Surprised laughter)
It's not simply relegated to a kind of 2D panning and motion. This gesture is just putting two fingers down -- it's defining an axis of tilt -- and I can tilt up and down that way. We just came up with that on the spot; it's probably not the right thing to do, but there are such interesting things you can do with this interface. It's just so much fun playing around with it, too.
(Laughter)
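One way to implement that tilt gesture, as a sketch: project the two contacts onto the ground plane, take the line through them as the rotation axis, and rotate the camera about it by an angle driven by the drag. The function below uses Rodrigues' rotation formula; the argument names and the choice of pivot are assumptions for illustration, not WorldWind's actual code.

```python
import numpy as np

def tilt_camera(camera_pos, look_at, finger_a, finger_b, tilt_radians):
    """Tilt the camera about the axis defined by two fingers on the map.

    finger_a, finger_b: the two contact points projected onto the ground
    plane (3-vectors). The line through them is the axis of tilt; dragging
    up or down supplies tilt_radians."""
    axis = np.asarray(finger_b, float) - np.asarray(finger_a, float)
    axis /= np.linalg.norm(axis)

    def rotate(v, k, theta):
        # Rodrigues' rotation formula: rotate vector v about unit axis k by theta.
        v = np.asarray(v, float)
        return (v * np.cos(theta)
                + np.cross(k, v) * np.sin(theta)
                + k * np.dot(k, v) * (1 - np.cos(theta)))

    # Pivot the camera around the midpoint between the two fingers.
    pivot = (np.asarray(finger_a, float) + np.asarray(finger_b, float)) / 2
    new_pos = pivot + rotate(np.asarray(camera_pos, float) - pivot, axis, tilt_radians)
    new_look = pivot + rotate(np.asarray(look_at, float) - pivot, axis, tilt_radians)
    return new_pos, new_look
```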
And so the last thing I want to show you is -- I'm sure we can all think of a lot of entertainment apps that you can do with this thing. I'm more interested in the creative applications we can do with this. Now, here's a simple application -- I can draw out a curve. And when I close it, it becomes a character. But the neat thing about it is I can add control points. And then what I can do is manipulate them with both of my fingers at the same time. And you notice what it does. It's kind of a puppeteering thing, where I can use as many fingers as I have to draw and make --
Now, there's a lot of actual math going on under the hood here to control this mesh and do the right thing. This technique of being able to manipulate a mesh with multiple control points is actually state of the art. It was released at SIGGRAPH last year. It's a great example of the kind of research I really love: all this compute power going into making things do the right thing, the intuitive thing, exactly what you expect.
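The method being referred to solves a least-squares problem so the mesh deforms as rigidly as possible around the control points. As a much simpler stand-in that only captures the interface, here's a sketch where each vertex moves by an inverse-distance-weighted blend of the handles' displacements; this is not the SIGGRAPH technique, just an illustration of driving a mesh with several fingers at once.

```python
import numpy as np

def deform_mesh(rest_vertices, handles_rest, handles_now, power: float = 2.0, eps: float = 1e-6):
    """Move each vertex by an inverse-distance-weighted blend of the handles'
    displacements. (The technique in the talk instead solves a least-squares
    problem that keeps each triangle as rigid as possible.)

    rest_vertices: (N, 2) array of vertex positions in the undeformed pose.
    handles_rest:  (M, 2) array of control-point positions in the rest pose.
    handles_now:   (M, 2) array of where the fingers have dragged them."""
    rest_vertices = np.asarray(rest_vertices, float)
    handles_rest = np.asarray(handles_rest, float)
    displacements = np.asarray(handles_now, float) - handles_rest

    deformed = rest_vertices.copy()
    for i, v in enumerate(rest_vertices):
        d = np.linalg.norm(handles_rest - v, axis=1)
        w = 1.0 / (d ** power + eps)          # closer handles dominate
        w /= w.sum()
        deformed[i] = v + w @ displacements   # weighted blend of handle motion
    return deformed
```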
So, multi-touch interaction research is a very active field right now in HCI. I'm not the only one doing it; a lot of other people are getting into it. This kind of technology is going to let even more people get into it. I'm looking forward to interacting with all of you over the next few days and seeing how it can apply to your respective fields.
Thank you.
(Applause)