So, where are the robots? We've been told for 40 years already that they're coming soon. Very soon they'll be doing everything for us: they'll be cooking, cleaning, buying things, shopping, building. But they aren't here. Meanwhile, we have illegal immigrants doing all the work, but we don't have any robots. So what can we do about that? What can we say? So I want to give a little bit of a different perspective on how we can perhaps look at these things in a different way. And this is an x-ray picture of a real beetle, and a Swiss watch, back from '88. You look at that -- what was true then is certainly true today. We can still make the pieces. We can make the right pieces. We can make the circuitry with the right computational power, but we can't actually put them together to make something that will actually work and be as adaptive as these systems.
So let's try to look at it from a different perspective. Let's summon the best designer, the mother of all designers. Let's see what evolution can do for us. So we threw in -- we created a primordial soup with lots of pieces of robots -- with bars, with motors, with neurons. We put them all together, put all this under a kind of natural selection, under mutation, and rewarded things for how well they could move forward. A very simple task, and it's interesting to see what kind of things came out of that.
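To make that setup concrete, here is a minimal sketch of an evolutionary loop of that kind in Python. The genome encoding, the mutation rate, and the toy fitness function are stand-ins of my own, not the actual bar-motor-neuron physics simulation used in the experiment; the only thing it shares with the real system is the shape of the loop: mutate, select for forward motion, repeat.

```python
import random

# Hypothetical stand-in: a "robot" is just a list of motor parameters.
GENOME_LENGTH = 10
POPULATION_SIZE = 100
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder for the real physics simulation: the reward is simply
    # how far the simulated machine moves forward. Faked here with a toy
    # function so the loop runs end to end.
    return sum(g for g in genome if g > 0)

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(50):
    # Selection: keep the half that "moves forward" best ...
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    # ... and refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION_SIZE - len(survivors))]
    print(generation, round(fitness(population[0]), 3))
```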
So if you look, you can see a lot of different machines come out of this. They all move around. They all crawl in different ways, and you can see on the right that we actually made a couple of these things, and they work in reality. These are not very fantastic robots, but they evolved to do exactly what we rewarded them for:
for moving forward. So that was all done in simulation, but we can also do that on a real machine. Here's a physical robot on which we actually have a population of brains competing, or evolving, on the machine. It's like a rodeo show: they all get a ride on the machine, and they get rewarded for how fast or how far they can make the machine move forward. And you can see these robots are not ready to take over the world yet, but they gradually learn how to move forward, and they do this autonomously.
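The same loop carries over when the fitness test happens on hardware rather than in simulation: each candidate "brain" gets a short trial on the machine and is scored by how far the robot actually moves. A rough sketch, where run_on_robot and measure_forward_distance are hypothetical placeholders for whatever interface the real robot exposes:

```python
import random

def measure_forward_distance():
    # Hypothetical stand-in for the robot's odometry or a tracking camera.
    return random.random()

def run_on_robot(controller, seconds=10.0):
    # Hypothetical stand-in for streaming the controller's motor commands
    # to the physical machine for a fixed-length trial.
    pass

def evaluate_on_hardware(controller):
    """Give one candidate 'brain' its ride on the machine and score it
    by how far the robot actually travels forward."""
    start = measure_forward_distance()
    run_on_robot(controller)
    return measure_forward_distance() - start
```

The evolutionary loop itself stays exactly as in the simulated version; only the fitness call changes, which is why the trials have to be run one brain at a time, rodeo-style.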
So in these two examples, we had basically machines that learned how to walk in simulation, and also machines that learned how to walk in reality. But I want to show you a different approach, and this is this robot over here, which has four legs. It has eight motors, four on the knees and four on the hips. It also has two tilt sensors that tell the machine which way it's tilting.
But this machine doesn't know what it looks like. You look at it and you see it has four legs; the machine doesn't know if it's a snake or a tree. It doesn't have any idea what it looks like, but it's going to try to find that out. Initially, it does some random motion, and then it tries to figure out what it might look like. And you're seeing a lot of things passing through its mind, a lot of self-models that try to explain the relationship between actuation and sensing. It then tries to do a second action that creates the most disagreement among the predictions of these alternative models, like a scientist in a lab. Then it does that, tries to explain the result, and prunes out its self-models.
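Here is a minimal sketch of that disagreement-driven loop, assuming for illustration that a "self-model" is just a function mapping a motor command to a predicted tilt-sensor reading. The candidate models, the action space, and the disagreement measure below are all simplified stand-ins, not the real system.

```python
import random
import statistics

# Candidate self-models: each maps an action to a predicted tilt reading.
# In the real robot these are candidate body morphologies; here they are
# just toy linear hypotheses with different assumed gains.
candidate_models = [lambda a, k=k: k * a for k in (-2.0, -1.0, 0.5, 1.0, 2.0)]

def true_robot(action):
    # Hypothetical stand-in for the physical robot's actual tilt response.
    return 1.0 * action + random.gauss(0, 0.05)

actions = [a / 10 for a in range(-10, 11)]

for cycle in range(5):
    # Pick the action whose predicted outcomes disagree the most across the
    # models -- like a scientist choosing the most informative experiment.
    action = max(actions,
                 key=lambda a: statistics.pstdev(m(a) for m in candidate_models))
    observed = true_robot(action)
    # Prune the model that explained this observation worst.
    candidate_models.sort(key=lambda m: abs(m(action) - observed))
    candidate_models = candidate_models[: max(1, len(candidate_models) - 1)]

print("surviving model gain(s):",
      [round(m(1.0), 2) for m in candidate_models])
```

The design point is that the robot never asks "which action moves me forward?" during this phase; it only asks "which action will teach me the most about my own body?"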
This is the last cycle, and you can see it's pretty much figured out what it looks like. And once it has a self-model, it can use that to derive a pattern of locomotion. So what you're seeing here is that pattern of locomotion in action. We were hoping that it was going to have a kind of evil, spidery walk, but instead it created this pretty lame way of moving forward.
But when you look at that, you have to remember that this machine did not do any physical trials on how to move forward, nor was it given a model of itself. It kind of figured out what it looks like, and how to move forward, and then actually tried that out. (Applause)
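To show how a self-model can stand in for physical trials, here is a rough sketch in the same spirit: the robot searches for a gait entirely inside its own model, and only the winning gait ever gets executed on the real legs. The predict_forward_motion and plan_gait functions are illustrative assumptions, not the actual planner used.

```python
import random

def predict_forward_motion(self_model, gait):
    # Hypothetical: roll the gait forward inside the robot's own self-model
    # and estimate how far the body would travel. No physical trial involved.
    return sum(self_model(step) for step in gait)

def plan_gait(self_model, steps=8, candidates=1000):
    # Search entirely "in the head": generate random gaits, score them
    # against the self-model, and keep the best one.
    best, best_score = None, float("-inf")
    for _ in range(candidates):
        gait = [random.uniform(-1, 1) for _ in range(steps)]
        score = predict_forward_motion(self_model, gait)
        if score > best_score:
            best, best_score = gait, score
    return best  # only now does the robot try this gait on its real legs
```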
So, we'll move on to a different idea. So that was what happened when we had a couple of -- OK, OK, OK -- (Laughter) -- they don't like each other. So there's a different robot. That's what happened when the robots are actually rewarded for doing something. What happens if you don't reward them for anything, and just throw them in?
So we have these cubes, like the one shown in the diagram here. The cube can swivel, or flip on its side, and we just throw 1,000 of these cubes into a soup -- this is in simulation -- and don't reward them for anything; we just let them flip. We pump energy into this and see what happens over a couple of mutations. So, initially nothing happens; they're just flipping around there. But after a very short while, you can see the blue things on the right there begin to take over.
They begin to self-replicate. So in the absence of any reward, the intrinsic reward is self-replication. And we've actually built a couple of these, and this is part of a larger robot made out of these cubes. It's an accelerated view, where you can see the robot actually carrying out some of its replication process. So you're feeding it more material -- cubes in this case -- and more energy, and it can make another robot. So of course, this is a very crude machine, but we're working on a micro-scale version of these, and hopefully the cubes will be like a powder that you pour in.
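A toy version of that experiment, reduced to its bare logic: nothing is rewarded, structures appear through random flips, and the only thing that persists is whatever happens to copy itself. None of the cube mechanics are modeled here; this is just a sketch of why replicators take over a soup.

```python
import random

# Toy soup: each slot is either raw material (None) or a structure, tagged by
# whether it happens to be a self-replicator. There is no reward signal; we
# just inject random "flips" and watch what persists.
SOUP_SIZE = 1000
soup = [None] * SOUP_SIZE

for step in range(10_000):
    i = random.randrange(SOUP_SIZE)
    if soup[i] is None and random.random() < 0.001:
        # A random flip occasionally assembles a structure; very rarely that
        # structure happens to be one that can copy itself.
        soup[i] = "replicator" if random.random() < 0.05 else "inert"
    elif soup[i] == "replicator":
        # A replicator that bumps into raw material converts it into
        # another copy of itself.
        j = random.randrange(SOUP_SIZE)
        if soup[j] is None:
            soup[j] = "replicator"

print("replicators:", soup.count("replicator"), "inert:", soup.count("inert"))
```

Even though inert structures are created far more often than replicators, the replicators grow exponentially once one appears, which is the sense in which self-replication is the "intrinsic reward" here.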
OK, so what can we learn? These robots are of course not very useful in themselves, but they might teach us something about how we can build better robots, and perhaps how humans and animals create self-models and learn. And one of the things that I think is important is that we have to get away from this idea of designing the machines manually, and instead let them evolve and learn, like children, and perhaps that's the way we'll get there. Thank you. (Applause)