Chris Anderson: I want you to come with me to Tesla's huge gigafactory in Austin, Texas. So the day before it opened last week, the evening before, I was allowed to walk around it, no one else there. And what I saw there was, honestly, pretty mind-blowing. This is Elon Musk's famous machine that builds the machine, and in his view, the secret to a sustainable future is not just making electric cars. It's making a system that churns out huge numbers of electric cars with a margin, so that they can fund further growth.
When I was there, none of us knew whether Elon would actually be able to make it here today, so I took the chance to sit down with him and record an epic interview. And I just want to show you an eight-minute excerpt of that interview. So here from Austin, Texas, Elon Musk.
(Video) I want us to switch now to think a bit about artificial intelligence. I’m curious about your timelines and how you predict and how come some things are so amazingly on the money and some aren't. So when it comes to predicting sales of Tesla vehicles, for example, you've kind of been amazing. I think in 2014 when Tesla had sold that year 60,000 cars, you said, "2020, I think we will do half a million a year."
Elon Musk: Yeah, we did almost exactly a half million.
CA: Five years ago, last time you came to TED, I asked you about full self-driving, and you said, “Yeah, this very year I'm confident that we will have a car going from LA to New York without any intervention."
EM: Yeah, I don't want to blow your mind, but I'm not always right.
CA: (Laughs) What's the difference between those two? Why has full self-driving in particular been so hard to predict?
EM: I mean, the thing that really got me, and I think it's going to get a lot of other people, is that there are just so many false dawns with self-driving where you think you've got the problem, have a handle on the problem, and then, no, it turns out you just hit a ceiling. Because if you were to plot the progress, the progress looks like a log curve. So it's like a series of log curves. So most people don't know what a log curve is, I suppose.
CA: Show the shape with your hands.
EM: It goes up, you know, sort of a fairly straight way, and then it starts tailing off and you start getting diminishing returns. You know, in retrospect, they seem obvious, but in order to solve full self-driving properly, you actually have to solve real-world AI. Because what are the road networks designed to work with? They're designed to work with a biological neural net, our brains, and with vision, our eyes. And so in order to make it work with computers, you basically need to solve real-world AI and vision. Because we need cameras and silicon neural nets in order to have self-driving work for a system that was designed for eyes and biological neural nets. You know, I guess when you put it that way, it's sort of, like, quite obvious that the only way to solve full self-driving is to solve real-world AI and sophisticated vision.
CA: What do you feel about the current architecture? Do you think you have an architecture now where there is a chance for the logarithmic curve not to tail off anytime soon?
EM: Well, I mean, admittedly these may be infamous last words, but I actually am confident that we will solve it this year. That we will exceed -- The probability of an accident, at what point do you exceed that of the average person? I think we will exceed that this year. We could be here talking again in a year, like, well, another year went by and it didn't happen. But I think this is the year.
CA: Is there an element that you actually deliberately make aggressive prediction timelines to drive people to be ambitious? Without that, nothing gets done?
So it feels like at some point in the last year, seeing the progress on the Tesla AI's understanding of the world around it led to a kind of aha moment at Tesla. Because you really surprised people recently when you said probably the most important product development going on at Tesla this year is this robot, Optimus. Is it something that happened in the development of full self-driving that gave you the confidence to say, "You know what, we could do something special here."
EM: Yeah, exactly. So, you know, it took me a while to sort of realize that in order to solve self-driving, you really needed to solve real-world AI. And at the point at which you solve real-world AI for a car, which is really a robot on four wheels, you can then generalize that to a robot on legs as well. The things that are currently missing are enough intelligence for the robot to navigate the real world and do useful things without being explicitly instructed. So the missing things are basically real-world intelligence and scaling up manufacturing. Those are two things that Tesla is very good at. And so then we basically just need to design the specialized actuators and sensors that are needed for a humanoid robot. People have no idea, this is going to be bigger than the car.
CA: So talk about -- I think the first applications you've mentioned are probably going to be in manufacturing, but eventually, the vision is to have these available for people at home. If you had a robot that really understood the 3D architecture of your house and knew where every object in that house was or was supposed to be and could recognize all those objects, I mean, that's kind of amazing, isn't it, like, the kind of thing that you could ask a robot to do would be what? Like, tidy up?
EM: Yeah, absolutely. Make dinner, I guess, mow the lawn.
CA: Take a cup of tea to grandma and show her family pictures.
EM: Exactly. Take care of my grandmother and make sure --
CA: It could obviously recognize everyone in the home. It could play catch with your kids.
EM: Yes. I mean, obviously, we need to be careful this doesn't become a dystopian situation. I think one of the things that's going to be important is to have a localized ROM chip on the robot that cannot be updated over the air. Where if you, for example, were to say, “Stop, stop, stop,” if anyone said that, then the robot would stop, you know, type of thing. And that's not updatable remotely. I think it's going to be important to have safety features like that.
CA: Yeah, that sounds wise.
EM: And I do think there should be a regulatory agency for AI. I've said this for many years. I don't love being regulated, but I think this is an important thing for public safety.
CA: Do you think there will be basically like in, say, 2050 or whatever, like, a robot in most homes, is what there will be, and people will love them and count on them? You'll have your own butler basically.
EM: Yeah, you'll have your sort of buddy robot probably, yeah.
CA: I mean, how much of a buddy? How many applications have you thought, you know, can you have a romantic partner, a sex partner?
EM: It's probably inevitable. I mean, I did promise the internet that I’d make catgirls. We could make a robot catgirl.
CA: Be careful what you promise the internet.
(Laughter)
EM: So, yeah, I guess it'll be whatever people want really, you know.
CA: What sort of timeline should we be thinking about of the first models that are actually made and sold?
EM: Well, you know, the first units that we intend to make are for jobs that are dangerous, boring, repetitive and things that people don't want to do. And, you know, I think we'll have like, an interesting prototype some time this year. We might have something useful next year, but I think quite likely within at least two years. And then we'll see rapid growth year over year of the usefulness of the humanoid robots and decrease in cost and scaling up production.
CA: Help me on the economics of this. So what do you picture the cost of one of these being?
EM: Well, I think the cost is actually not going to be crazy high. Like, less than a car.
CA: But think about the economics of this. If you can replace a 30,000-dollar, 40,000-dollar-a-year worker, which you have to pay every year, with a one-time payment of 25,000 dollars for a robot that can work longer hours, doesn't go on vacation, it could be a pretty rapid replacement of certain types of jobs. How worried should the world be about that?
EM: I wouldn't worry about the sort of, putting people out of a job thing. I think we're actually going to have, and already do have, a massive shortage of labor. So I think we will have ... Not people out of work, but actually still a shortage of labor even in the future. But this really will be a world of abundance. Any goods and services will be available to anyone who wants them. It'll be so cheap to have goods and services, it will be ridiculous.
(Applause)
CA: And now, arguably, the biggest visionary of them all, Elon Musk.
(Applause)
Hey, Elon, welcome. So, Elon, a few hours ago, you made an offer to buy Twitter.
(Laughter)
Why?
(Laughter)
EM: How'd you know?
(Laughter)
CA: A little bird tweeted in my ear or something, I don't know.
EM: By the way, have you seen the movie “Ted,” about the bear?
CA: I have.
EM: It's a good movie.
(Laughs)
CA: Don't mention that here.
(Laughter)
EM: So, yeah, yeah, so ... Was there a question?
CA: Why make that offer?
EM: Oh, so, well, I think it's very important for there to be an inclusive arena for free speech where all ... Twitter has become, kind of, the de facto town square. It’s just really important that people have both the reality and the perception that they are able to speak freely within the bounds of the law. And, you know, one of the things that I believe Twitter should do is open-source the algorithm and make any changes to people’s tweets, or if they're emphasized or de-emphasized, that action should be made apparent so anyone can see that action has been taken. So there's no, sort of, behind the scenes manipulation, either algorithmically or manually.
(Applause) CA: Last week when we spoke, Elon, I asked you whether you were thinking of taking over. You said, no way. You said, “I do not want to own Twitter. It is a recipe for misery. Everyone will blame me for everything." What on Earth changed?
EM: No, I think everyone will still blame me for everything.
(Laughter)
If I acquire Twitter and something goes wrong, it’s my fault, 100 percent. I think there will be quite a few arrows, yes.
CA: It will be miserable. But you still want to do it. Why?
EM: I mean, I hope it's not too miserable. I just think it's important to the ... It’s important to the function of democracy. It’s important to the function of the United States as a free country and many other countries. And to actually help freedom in the world, more broadly than the US. And so I think it's, you know, I think the civilizational risk is decreased the more we can increase the trust of Twitter as a public platform. And so I do think this will be somewhat painful. And I’m not sure that I will actually be able to acquire it. And I should also say the intent is to retain as many shareholders as is allowed by the law in a private company, which I think is around 2,000 or so. So it's definitely not, from the standpoint of, "Let me figure out how to monopolize or maximize my ownership of Twitter." But I will try to bring along as many shareholders as we're allowed to.
CA: You don't necessarily want to pay out 40 or whatever it is, billion dollars in cash. You'd like them to come with you in the new company.
EM: I mean, you know, I could technically afford it.
CA: I heard that.
EM: But what I'm saying is, this is not a way to, sort of, make money, you know. It's just that I think this is ... My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization.
CA: But you've described yourself --
EM: I don't care about the economics at all.
CA: OK, that's cool to hear. This is not about the economics. It’s for the moral good that you think it will achieve. You’ve described yourself, Elon, as a “free speech absolutist.” But does that mean that there's literally nothing that people can't say, and it's OK?
EM: Well, I think, obviously, Twitter or any forum is bound by the laws of the country that it operates in. So obviously, there are some limitations on free speech in the US. And of course, Twitter would have to abide by those rules.
CA: So you can't incite people to violence, like, a direct incitement to violence you know, you can't do the equivalent of crying "fire" in a movie theater, for example.
EM: No, that would be a crime. It should be a crime.
CA: But here's the challenge, it's such a nuanced difference between different things. So there's incitement to violence. That's a no if it's illegal. There's hate speech, which, some forms of hate speech are fine. You know, "I hate spinach."
EM: I mean, if it's sauteed in, you know, cream sauce, it can be quite nice.
CA: But the problem is, so let's say someone says -- OK, here's one tweet, "I hate politician X." Next tweet is, "I wish politician X wasn't alive," as some of us have said about Putin right now, for example. So that’s legitimate speech. Another tweet is, "I wish politician X wasn't alive" with a picture of their head with a gunsight over it. Or that plus their address. I mean, at some point someone has to make a decision as to which of those is not OK. Can an algorithm do that, or surely you need human judgment at some point?
EM: No, I think, like I said, in my view, Twitter should match the laws of the country. And really, you know, there's an obligation to do that. But going beyond that, and having it be unclear who's making what changes to where, having tweets sort of, mysteriously be promoted and demoted with no insight into what's going on, having a black-box algorithm promote some things and not other things, I think this can be quite dangerous.
CA: So, the idea of opening the algorithm is a huge deal, and I think many people would welcome that of understanding exactly how it's making the decision.
EM: And critique it, like, what I mean is like, I think the code should be on GitHub, you know. So people can look through it and say, like, "I see a problem here." "I don't agree with this." They can highlight issues, suggest changes in the same way that you update Linux or Signal or something like that, you know.
CA: As I understand it, at some point right now, what the algorithm would do is it would look at, for example, how many people have flagged a tweet as obnoxious. And then at some point a human has to look at it and make a decision as to, does this cross the line or not, that the algorithm itself can't, I don't think yet, tell the difference between legal and OK and definitely obnoxious. And so the question is, which humans, you know, make that call? I mean, do you have a picture of that? Right now, Twitter and Facebook and others, you know, they've hired thousands of people to try to help make wise decisions. And the trouble is that no one can agree on what is wise. How do you solve that?
EM: Well I think we would want to err on the side -- if in doubt, let the speech exist. If it's, you know, a gray area, I would say let the tweet exist. But obviously, you know, in a case where there's perhaps a lot of controversy that you would not want to necessarily promote that tweet.
So I'm not saying that I have all the answers here. But I do think that we want to be just very reluctant to delete things and just be very cautious with permanent bans. You know, time-outs, I think, are better than, sort of, permanent bans. And ... But just in general, like I said ... It won’t be perfect, but I think we want it to really have the perception and reality that speech is as free as reasonably possible. And a good sign as to whether there’s free speech is: is someone you don’t like allowed to say something you don’t like? And if that is the case, then we have free speech. And it's damn annoying when someone you don't like says something you don't like. That is a sign of a healthy, functioning, free speech situation.
(Applause)
CA: I think many people would agree with that. And looking at the reaction online, many people are excited by you coming in and the changes you're proposing. Some others are absolutely horrified. Here's how they would see it. They would say, "Wait a sec. We agree that Twitter is an incredibly important town square. It is where the world exchanges opinion about life and death matters. How on Earth could it be owned by the world’s richest person? That can't be right." So what's the response there? Is there any way that you can distance yourself from the actual decision making that matters on content in some very clear way that is convincing to people?
EM: Well, like I said, I think it's very important that the algorithm be open-sourced and that any manual adjustments be identified. So if somebody did something to a tweet, there's information attached to it that action was taken. And I won't personally be, you know, in there editing tweets. But you'll know if something was done to promote, demote or otherwise affect a tweet.
You know, as for media sort of ownership, I mean, you've got, you know, Mark Zuckerberg owning Facebook and Instagram and WhatsApp and with a share ownership structure that will have Mark Zuckerberg XIV still controlling those entities. Like, literally.
(Laughter)
We won't have that at Twitter.
CA: If you commit to opening up the algorithm that definitely gives some level of confidence. Talk about some of the other changes that you've proposed. The edit button, that's definitely coming if you have your way.
EM: Yeah. I think, I mean, frankly ... The top priority I would have is eliminating the spam and scam bots and the bot armies that are on Twitter.
(Applause)
You know, I think these influence -- they make the product much worse. You know, if I had a Dogecoin for every crypto scam I saw.
(Laughter)
I would have 100 billion Dogecoin.
CA: Do you regret sparking a sort of, storm of excitement over DOGE and, you know, where it's gone?
EM: I mean, I think DOGE is fun. And, you know, I've always said don't bet the farm on Dogecoin, FYI.
(Laughs)
But I think it’s, I like dogs and I like memes, and it's got both of those.
(Laughter)
CA: But just on the edit button, how do you get around the problem of, so someone tweets, “Elon rocks” and it’s retweeted by two million people. And then after that, they edit it to “Elon sucks,” and then all those retweeters, they're all embarrassed. How do you avoid that type of changing of meaning so that retweeters are exploited?
EM: Well, I think you'd only have the edit capability for a short period of time. And probably the thing to do upon the edit would be to zero out all retweets and favorites.
CA: OK.
EM: I'm open to ideas though, you know.
CA: So in one way, the algorithm works kind of well for you right now, I wanted to show you this. This is a typical tweet of mine, kind of lame and wordy and whatever. And the amazing response it gets is this, oh, my God, 97 likes. And then I tried another one.
[I get to interview @elonmusk on Thursday here at TED. What questions would YOU ask him?]
(Laughter)
29,000 likes. So the algorithm, at least seems to be, at the moment, you know: “If (Elon Musk), expand to the world immediately.” Not bad, right?
EM: Yeah, I guess so, I mean, that is cool.
(Laughter)
CA: So help us understand how it is you've built this incredible following on Twitter yourself when some of the people who love you the most look at some of what you tweet and they think it's somewhere between embarrassing and crazy, some of it's amazing.
EM: It is a little ...
CA: (Laughs) But is that actually why it's worked, why has it worked?
EM: I mean, I don't know. I'm tweeting more or less stream of consciousness, you know. It's not like, let me think about some grand plan about my Twitter or whatever. You know, I'm, like, literally, on the toilet or something, like, "Ha ha, this is funny." And then tweet that out. That's like most of them. (Laughs) That's, you know, oversharing.
(Laughter)
CA: But you are obsessed with getting the most out of every minute of your day. And so why not?
EM: So, I don't know. I try to tweet out things that are interesting or funny. And then people seem to like it.
CA: So if you are unsuccessful -- actually, before I ask that, let me ask this. So how can I say, is funding secured?
(Laughter)
EM: I have sufficient assets to complete ... This is not a forward-looking statement, blah blah blah.
(Laughs)
I mean, I can do it if possible. I should say, actually even originally with Tesla back in the day, funding was actually secured. I want to be clear about that. In fact, this may be a good opportunity to clarify that. Funding was indeed secured, and I should say, like, why do I not have respect for the SEC in that situation? And I don't mean to blame everyone at the SEC, but certainly the San Francisco office, it's because the SEC knew that funding was secured, but they pursued an active public investigation nonetheless. At the time, Tesla was in a precarious financial situation, and I was told by the banks that if I did not agree to settle with the SEC that the banks would cease providing working capital and Tesla would go bankrupt immediately. So that's like having a gun to your child's head. So I was forced to concede to the SEC unlawfully. Those bastards.
(Laughter)
And now it makes it look like I lied when I did not, in fact, lie. I was forced to admit that I lied to save Tesla's life. And that's the only reason.
CA: Given what's actually happened --
(Applause)
Given what's actually happened to Tesla since then, though, aren't you glad that you didn't take it private?
EM: Yeah. I mean ... It's difficult to put yourself in the position at the time, Tesla was under the most relentless short seller attack in the history of the stock market. There’s something called “short and distort,” where the barrage of negativity that Tesla was experiencing from short sellers on Wall Street was beyond all belief. Tesla was the most shorted stock in the history of stock markets. This is saying something. So, you know, this was affecting our ability to hire people, it was affecting our ability to sell cars, it was ... Yeah, it was terrible. They wanted Tesla to die so bad, they could taste it.
CA: Well, most of them have paid the price.
EM: Yes. Where are they now?
(Laughter)
CA: So that was a very strong statement. Obviously, a lot of people who support you, I would've thought, would say, you have so much to offer the world on the upside, on the vision side, don't waste your time getting distracted by these battles that bring out negativity and make people feel that you're being defensive. People don't like fights, especially with powerful government authorities. They'd rather buy into your dream. Aren't you encouraged by people just to edit that temptation out and go with the bigger story?
EM: Well, I mean, I would say, like, you know, I’m somewhat of a mixed bag, you know.
(Laughter)
CA: Well, you're a fighter and you don't ... You don't like to lose and you are determined that you don't. Basically, I mean.
EM: Sure I don't like to lose, I'm not sure many people do. But the truth matters to me a lot. Sort of, pathologically, it matters to me.
CA: OK. So you don't like to lose, if in this case you are not successful in, you know, the board does not accept your offer, you've said you won't go higher. Is there a plan B?
EM: There is.
(Laughter)
CA: I think we would like to hear a little bit about plan B.
(Laughter)
EM: For another time, I think.
CA: Another time, alright.
(Applause) I ... That's a nice tease. Alright.
(Laughter)
I would love to try to understand this brain of yours more, Elon. With your permission, I'd like to just play this. Oh, actually, before we do that, here was one of the thousands of questions that people asked. I thought this was actually quite a good one. "If you could go back in time and change one decision you made along the way" -- do your own edit button -- "which one would it be and why?"
EM: Do you mean like a career decision or something?
CA: Just any decision over the last few years, like your decision to invest in Twitter in the first place or your ... Anything.
EM: I mean, the worst business decision I ever made was not starting Tesla with just JB Straubel. By far the worst decision I've ever made is not just starting Tesla with JB. That's number one by far.
CA: Alright, so JB Straubel was the visionary cofounder who was obsessed with and knew so much about batteries. And your decision to go with Tesla, the company as it was, meant that you got locked into what you now concluded was a weird architecture.
EM: There's a lot of confusion. Tesla did not exist in any ... Tesla was a shell company with no employees, no intellectual property, when I invested. But a false narrative has been created by one of the other cofounders, Martin Eberhard. And I don't want to get into the nastiness here, but I didn't invest in an existing company. We created a company. And ultimately, the creation of that company was done by JB and me. And unfortunately, there's someone else, another cofounder who has made it his life’s mission to make it sound like he created the company, which is false.
CA: Wasn't there another issue right at the heart of the development of the Tesla Model 3, where Tesla almost went bankrupt? And I think you have said that part of the reason for that was that you overestimated the extent to which it was possible, at that time, to automate a factory. Huge amount was spent kind of over-automating, and it didn’t work, and it nearly took the company down. Is that fair?
EM: I mean, first of all, it's important to understand, like, what has Tesla actually accomplished that is most noteworthy. It is not the creation of an electric vehicle or creating an electric vehicle prototype or low-volume production of a car. There have been hundreds of car start-ups over the years, hundreds. And in fact, at one point, Bloomberg counted up the number of electric vehicle start-ups, and I think they got to almost 500. So the hard part is not creating a prototype or going into limited production. The absolutely difficult thing, which has not been accomplished by an American car company in 100 years, is reaching volume production without going bankrupt. That is the actual hard thing. The last company, American company, to reach volume production without going bankrupt, was Chrysler in the '20s.
CA: And it nearly happened to Tesla.
EM: Yes, but it's not like, oh, geez, I guess if we'd just done more manual stuff, things would have been fine. Of course not. That is definitely not the case. So we basically messed up almost every aspect of the Model 3 production line, from cells to packs to drive inverters, motors, body line, the paint shop, final assembly, everything. Everything was messed up. I lived in the Fremont and Nevada factories for three years, fixing that production line, running around like a maniac through every part of that factory. Living with the team. I slept on the floor, so the team, who was going through a hard time, could see me on the floor. That they knew that I was not in some ivory tower. When whatever pain they experienced, I had it more.
CA: And some people who knew you well actually thought you were making a terrible mistake, that you were driving yourself to the edge of sanity, almost. And that you were in danger of making bad choices. And in fact, I heard you say last week, Elon, that you, because of Tesla's huge value now and, you know, the significance of every minute that you spend, that you are in danger of sort of, obsessing over it, spending all this time to the edge of sanity. That doesn't sound super wise. Isn't there ... Your time, your completely sane, centered, rested time and decision making is more powerful and compelling than that sort of, "I can barely hold my eyes open." So surely it should be an absolute strategic priority to look after yourself.
EM: I mean, there wasn't any other way to make it work. Those were three years of hell. 2017, 2018 and 2019 were the longest period of excruciating pain in my life. There wasn't any other way, and we barely made it. And we were on the ragged edge of bankruptcy the entire time. It's not like I want the pain, I don't like it. Those were three -- So, so, so much pain. But it had to be done or Tesla would be dead.
CA: When you looked around the Gigafactory that we saw images of earlier last week and just see where the company's come, do you feel that this challenge of figuring out the new way of manufacturing, that you actually have an edge now, that it’s different, that you've figured out how to do this. And that those three years, won't be repeated? You've actually figured out a new way of manufacturing.
EM: At this point, I think I know more about manufacturing than anyone currently alive on Earth.
(Laughter)
(Applause)
I can tell you how every damn part of that car is made. It's basically if you just live on the factory, you live in the factory for three years and -- (Music)
CA: That was nice.
EM: Poignant note to something.
(Laughter)
CA: Someone wants to compose a symphony to that expression of confidence, something like that. I have no idea what that is.
EM: Anyway, yeah. Every aspect of a car, six ways to Sunday, I know it.
CA: I mean, you talk about scale, right. You’re in the middle of writing your new master plan, and you've said that scale is at the heart of it. Why does scale matter? Why are you obsessed with it, what are you thinking?
EM: Yeah, well, see, in order to accelerate the advent of sustainable energy, there must be scale. Because we've got to transition a vast economy that is currently overly dependent on fossil fuels to a sustainable energy economy. One where the energy is -- Yeah, I mean, we've got to do it.
(Applause)
So the energy's got to be sustainably generated with wind, solar, hydro, geothermal, I’m a believer in nuclear as Isabelle [Boemeke] gave a talk about. And then since solar and wind are intermittent, you have to have stationary storage batteries, and then we've got to transition all transport to electric. If we do those things, we have a sustainable energy future. The faster we do those things, the less risk we put to the environment. So sooner is better.
And so scale is very important. You know, it's not about press releases. It's about tonnage. What was the tonnage of batteries produced and obviously done in a sustainable way? And our estimate is that approximately 300 terawatt hours of battery storage is needed to transition transport, electricity and heating and cooling to a fully electric situation. There may be some different estimates out there, but our estimate is 300 terawatt hours.
CA: So we dug into this a lot in the interview that we recorded last week, so people can go in and hear that more. But I mean, the context is, that is, I think about 1,000 times the current installed battery capacity. I mean, the scale up needed is breathtaking, basically. And ... So your vision is to commit Tesla to try to deliver on a meaningful percentage of what is needed. And call on others to do the rest. This is a task for humanity to massively scale up our response to change the energy grid.
EM: Yes. It's like basically how fast can we scale and encourage others to scale to get to that 300-terawatt-hour installed base of batteries?
CA: Right.
EM: And then, of course, there’ll be a tremendous need to recycle those batteries, and it makes sense to recycle them because the raw materials are like high grade ore. People shouldn't think there will be this big pile of batteries, they can get recycled, because even a dead battery pack is worth about 1,000 dollars. But this is what's needed for a sustainable energy future. So we're going to try to take a set of actions that accelerate the day and bring the day of a sustainable energy future sooner.
CA: OK.
(Applause)
There's going to be a huge interest in your master plan when you publish that.
Meanwhile, I just would love to understand more what goes on in this brain of yours, because it is a pretty unique one. I want to play, with your permission, this very funny opening from SNL, Saturday Night Live.
(Video) Thank you. Thank you very much. It's an honor to be hosting Saturday Night Live. I mean that. Sometimes after I say something, I have to say, "I mean that."
(Laughter)
So people really know that I mean it. That's because I don't always have a lot of intonational variation in how I speak.
(Laughter)
Which, I'm told, makes for great comedy.
(Laughter)
I'm actually making history tonight as the first person with Asperger's to host SNL.
(Applause and cheers)
CA: And I think you followed that up with --
EM: At least the first person to admit it.
CA: The first person to admit it.
(Laughter)
(Applause)
So this was a brave thing to say. But I would love to understand, you know, how you think of Asperger's, like, whether you can give us any sense of what the experience was for you, even as a boy, or as you now understand it with the benefit of hindsight. Can you talk about that a bit?
EM: Well, I think everyone's experience is going to be somewhat different. But I guess for me, the social cues were not intuitive. So I was just very bookish, and I didn’t understand ... I guess others could sort of intuitively understand what was meant by something. I would just tend to take things very literally, like the words, as spoken, were exactly what they meant. But then that turned out to be wrong. People are not simply saying exactly what they mean; there are all sorts of other things that are meant. It took me a while to figure that out. I was, you know, bullied quite a lot. So ... I did not have a sort of happy childhood, to be frank. It was quite, quite rough. But I read a lot of books. I read lots and lots of books, and, you know, sort of gradually understood more from the books that I was reading, and watched a lot of movies. But it took me a while to understand things that most people intuitively understand.
CA: So I've wondered whether it's possible that that was, in a strange way, an incredible gift to you, and indirectly to many other people. Inasmuch as ... brains, you know, are plastic, and they go where the action is. And if for some reason the external world and social cues, which so many people spend so much time and mental energy obsessing over, if that is partly cut off, isn't it possible that that is partly what gave you the ability to understand the world inwardly at a much deeper level than most people do?
EM: I suppose that's certainly possible. I think there's maybe some value also from a technology standpoint, because I found it rewarding to spend all night programming computers just by myself. And I think most people, most people don't enjoy typing strange symbols into a computer by themselves all night. They think that's not fun. But I thought it was, I really liked it. So I would just program all night by myself. And I found that to be quite enjoyable. But I think that is not normal.
(Laughter)
CA: I've thought a lot about ... It's a riddle to a lot of people how you've done this, how you've repeatedly innovated in these different industries. And, you know, every entrepreneur sees possibility in the future and then acts to make it real. It feels to me like you see possibility just more broadly than almost anyone and can connect the dots. You see scientific possibility based on a deep understanding of physics and knowing what the fundamental equations are; knowing what the technologies are that are based on that science and where they could go, you see technological possibility. And then, really unusually, you combine that with economic possibility: what it actually would cost, whether there is a system you can imagine where you could affordably make that thing. And sometimes you then get conviction that there is an opportunity here. Put those pieces together and you could do something amazing.
EM: Yeah, I think one aspect of whatever condition I had was that I was just absolutely obsessed with truth. Just obsessed with truth. And the obsession with truth is why I studied physics, because physics attempts to understand the truth of the universe. Physics is just: what are the provable truths of the universe, truths that have predictive power? So for me, physics was sort of a very natural thing to study. Nobody made me study it; I was intrinsically interested to understand the nature of the universe. And then computer science, or information theory, also, to just understand logic. And, you know, there's an argument that information theory is actually operating at a more fundamental level than even physics. So yeah, physics and information theory were really interesting to me.
CA: So when you say truth, I mean ... what you're talking about is the truth of the universe. The fundamental truths that drive the universe. It's a deep curiosity about what this universe is, why we're here, the simulation question, which we don't have time to go into. But I mean, you’re just deeply, deeply curious about what this is for, what this is, this whole thing.
EM: Yes, I think the why of things is very important. When I was in my young teens, I got quite depressed about the meaning of life, and I was trying to sort of understand it. I was reading religious texts and reading books on philosophy. And I got into the German philosophers, which is definitely not wise if you're a young teenager, I have to say. It can be a bit dark; much better read as an adult. And then, actually, I ended up reading "The Hitchhiker's Guide to the Galaxy," which is actually a book on philosophy disguised as a silly humor book. And Adams makes the point that it's actually the question that is harder than the answer. You know, he sort of makes a joke that the answer is 42. That number does pop up a lot. And 420 is just 10 times 42.
CA: Ten times more significant than 42.
EM: But, you know, you can make a triangle with 42 degrees and two 69s.
(Laughter)
So there's no such thing as a perfect triangle, or is there?
(Laughter)
CA: But even more important than the answer is the question. That was the whole theme of that book. I mean, is that basically how you see meaning then, that it's the pursuit of questions?
EM: So I have a sort of, a proposal for a worldview or a motivating philosophy, which is to understand what questions to ask about the answer that is the universe. And to the degree that we expand the scope and scale of consciousness, biological and digital, we will be better able to ask these questions, to frame these questions and to understand why we're here, how we got here, what the heck is going on. And so that is my driving philosophy, is to expand the scope and scale of consciousness that we may better understand the nature of the universe.
CA: Elon, one of the things that was most touching last week --
(Applause)
was seeing you hang out with your kids. Here's, if I may ...
EM: He looks vaguely like a ventriloquist dummy there.
(Laughter)
I mean, how do you know that's real?
(Laughter)
CA: So that’s X, and it was just a delight seeing you hang out with him. What's his future going to be? I mean, I don't mean him personally, but the world he's going to grow up in. What future do you believe he will grow up in?
EM: Well, I mean, a very digital future.
(Laughs)
A very different world than I grew up in, that's for sure. But I think we obviously want to do our absolute best to ensure that the future is good for everyone's children. And that, you know, the future is something that you can look forward to and not feel sad about. You know, you want to get up in the morning and be excited about the future. And we should fight for the things that make us excited about the future. You know, the future cannot just be about one miserable thing after another, solving one sad problem after another. There have got to be things that get you excited, like, that make you want to live. These things are very important; we should have more of them.
CA: And it's not as if it's a done deal; it's all to play for. There are still scenarios where the future is horrible. But you see a pathway to an exciting future, both on Earth and on Mars, and in our minds through artificial intelligence and so forth. I mean, in your heart of hearts, do you really believe that you are helping deliver that exciting future, for X and for others?
EM: I mean, I'm trying my hardest to do so. I, you know, I love humanity, and I think that we should fight for a good future for humanity. And I think we should be optimistic about the future and fight to make that optimistic future happen.
(Applause)
CA: Elon, I think that's the perfect place to close this. Thank you so much for spending the time to come here and for the work that you're doing. And good luck with finding a wise course through Twitter and everything else.
EM: Alright, thank you.
(Applause)