We've already built artificial intelligence that, on specific tasks, performs better than humans. There's AI that can play chess and beat human grandmasters. But since the introduction of generative AI to the general public a couple of years ago, there's been more talk about artificial general intelligence, or AGI. That describes AI that can perform at or above human levels on a wide variety of tasks, just like we humans can. And people who think about AGI are worried about what it means if we reach that level of performance in the technology.
Right now, there are people from the tech industry coming out and saying, "The AI that we're building is so powerful and dangerous that it poses a threat to civilization." And they're going to government and saying, "Maybe you need to regulate us."
Now, normally when an industry makes a powerful new tool, they don't say it poses an existential threat to humanity and needs to be limited, so why are we hearing that language? I think there are two main reasons. One is that if your technology is so powerful it can destroy civilization, then between now and then there's an awful lot of money to be made with it. And what better way to convince your investors to put their money with you than to warn that your tool is that dangerous?
The other is that the idea of AI overtaking humanity is a truly cinematic concept. We've all seen those movies. And it's kind of entertaining to think about what that would mean now, with tools that we're actually able to put our hands on. In fact, it's so entertaining that it's a very effective distraction from the real problems already happening in the world because of AI. The more we think about these improbable futures, the less time we spend thinking about how to deal with deepfakes, or the fact that there's AI being used right now to decide whether or not people are let out of prison, and we know it's racially biased.
But are we anywhere close to actually achieving AGI? Some people think so. Elon Musk said that we'll achieve it within a year; I think he posted this a few weeks ago. But at the same time, Google put out their AI search tool that's supposed to give you the answer so you don't have to click on a link, and it's not going super well.
["How many rocks should I eat?"]
["... at least a single serving of pebbles, geodes or gravel ..."]
Please don't eat rocks.
(Laughter)
Now, of course these tools are going to get better. But if we're going to achieve AGI, or if they're even going to fundamentally change the way we work, we need them to keep climbing a sharp upward trajectory in terms of their abilities. That may be one path. But there's also the possibility that what we're seeing is that these tools have basically achieved what they're capable of doing, and the future is incremental improvements on a plateau. So to understand the AI future, we need to look at all the hype around it, get under there, and see what's technically possible. And we also need to think about which areas we need to worry about and which we don't.
So if we want to realize the hype around AI, the one main challenge we have to solve is reliability. These algorithms are wrong all the time, like we saw with Google. And Google actually came out and said, after those bad search results were publicized, that they don't know how to fix the problem. I use ChatGPT every day. I write a newsletter that summarizes discussions on far-right message boards, so I download that data and ChatGPT helps me write a summary. It makes me much more efficient than if I had to do it by hand. But I have to correct it every day, because it misunderstands something or takes things out of context. And so because of that, I can't just rely on it to do the job for me. And this reliability is really important.
Now, one part of this reliability problem is AI hallucination, a great technical term for the fact that AI just makes stuff up a lot of the time. I saw this with my newsletter. I asked ChatGPT, are there any people threatening violence? If so, give me the quotes. And it produced three really clear threats of violence that didn't sound anything like the way people talk on these message boards. I went back to the data, and nobody had ever said it. It just made it up out of thin air. And you may have seen this if you've used an AI image generator. I asked one to give me a close-up of people holding hands. That's a hallucination, and a disturbing one at that.
(Laughter)
We have to solve this hallucination problem if AI is going to live up to the hype. And I don't think it's a solvable problem, given the way this technology works. There are people who say we're going to have it taken care of in a few months, but there's no technical reason to think that's the case. Because generative AI always makes stuff up. When you ask it a question, it's creating that answer or that image from scratch. It's not like a search engine that goes and finds the right answer on a page. And because its job is to make things up every time, I don't know that we're going to be able to get it to make up correct stuff and not make up other stuff. That's not what it's trained to do, and we're very far from achieving that.
And in fact, there are spaces where they're trying really hard. One space where there's a lot of enthusiasm for AI is the legal area, where people hope it will help write legal briefs or do research. Some people have found out the hard way that they should not write legal briefs right now with ChatGPT and send them to federal court, because it just makes up cases that sound right. And that's a really fast way to get a judge mad at you and get your case thrown out. Now, there are legal research companies right now that advertise hallucination-free generative AI. I was really dubious about this. And researchers at Stanford actually went in and checked, and they found that the best-performing of these "hallucination-free" tools still hallucinates 17 percent of the time.
So like on one hand, it's a great scientific achievement that we have built a tool that we can pose basically any query to, and 60 or 70 or maybe even 80 percent of the time it gives us a reasonable answer. But if we're going to rely on using those tools and they're wrong 20 or 30 percent of the time, there's no model where that's really useful.
And that kind of leads us to the question of how we make these tools that useful. Because even if you don't believe me and think we're going to solve the hallucination problem and the reliability problem, the tools still need to get better than they are now. And they need two things for that: one is lots more data, and two, the technology itself has to improve.
So where are we going to get that data? Because they've kind of taken all the reliable stuff online already. And even if we found twice as much data as they've already had, that doesn't mean the models would be twice as smart. I don't know if there's enough data out there, and it's compounded by the fact that one thing generative AI has been very successful at is producing low-quality content online: bots on social media, misinformation, and those SEO pages that don't really say anything but have a lot of ads and come up high in the search results. And if AI starts training on pages that it generated, we know from decades of AI research that the models just get progressively worse. It's like the digital version of mad cow disease.
(Laughter)
Let's say we solve the data problem. You still have to make the technology better. And we've seen 50 billion dollars invested in the last couple of years in improving generative AI. That's resulted in three billion dollars in revenue. So that's not sustainable. But of course it's early, right? Companies may find ways to start using this technology. But is it going to be valuable enough to justify the tens, and maybe hundreds, of billions of dollars of hardware that needs to be bought to make these models better? I don't think so. And we can start looking at practical examples to figure that out. It leads us to think about which areas we need to worry about and which we don't.
Because one thing everybody's worried about with this is that AI is going to take all of our jobs. Lots of people are telling us that's going to happen, and people are worried about it. And I think there's a fundamental misunderstanding at the heart of that.
So imagine this scenario. We have a company, and they can afford to employ two software engineers. If we were to give those engineers some generative AI to help write code, which is something it's pretty good at, let's say they become twice as efficient. That's a big overestimate, but it makes the math easy. In that case, the company has two choices. They could fire one of those software engineers, because the other one can now do the work of two people. Or, since they already could afford two of them and those two are now twice as efficient, they're bringing in more money, so why not keep both of them and take that extra profit? The only way this math fails is if the AI is so expensive that it's not worth it. But that would mean the AI costs something like 100,000 dollars a year to do one person's worth of work, and that sounds really expensive. And practically, there are already open-source versions of these tools that are low-cost, that companies can install and run themselves. Now, they don't perform as well as the flagship models, but if they're half as good and really cheap, wouldn't you take those over the one that costs 100,000 dollars a year to do one person's work? Of course you would.
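To make that back-of-the-envelope argument concrete, here is a minimal sketch of the same arithmetic. The salary, revenue, and cheap-tool figures are illustrative assumptions added here, not numbers from the talk; the only figure taken from the talk is the 100,000-dollars-a-year price point for AI doing one person's worth of work.

```python
# Back-of-the-envelope comparison of the choices in the two-engineer scenario.
# All numbers are illustrative assumptions, except the $100,000/year tool cost
# used in the talk as the "too expensive to be worth it" example.

ENGINEER_SALARY = 150_000        # assumed annual cost of one engineer
VALUE_PER_ENGINEER_YEAR = 200_000  # assumed revenue from one engineer-year of work
AI_SPEEDUP = 2.0                 # the talk's deliberately generous "twice as efficient"

def annual_profit(engineers: int, use_ai: bool, ai_cost_per_seat: float = 0.0) -> float:
    """Revenue minus salaries and AI costs for a given staffing choice."""
    output = engineers * (AI_SPEEDUP if use_ai else 1.0)
    revenue = output * VALUE_PER_ENGINEER_YEAR
    costs = engineers * ENGINEER_SALARY
    if use_ai:
        costs += engineers * ai_cost_per_seat
    return revenue - costs

cheap_ai = 2_000      # assumed cost of a low-cost / open-source setup per engineer
pricey_ai = 100_000   # the "one person's worth of work" price point from the talk

print("Fire one, keep AI:     ", annual_profit(1, True, cheap_ai))
print("Keep both, cheap AI:   ", annual_profit(2, True, cheap_ai))
print("Keep both, pricey AI:  ", annual_profit(2, True, pricey_ai))
print("Keep both, no AI:      ", annual_profit(2, False))
```

Under these assumed numbers, keeping both engineers with a cheap tool produces the most profit, which is the talk's point: the math only favors cutting staff if the AI itself costs about as much as the person it replaces.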
And so even if we solve reliability, we solve the data problem, we make the models better, the fact that there are cheap versions of this available suggests that companies aren't going to be spending hundreds of millions of dollars to replace their workforce with AI.
There are areas we do need to worry about, though. Because if we look at AI now, there are lots of problems that we haven't been able to solve. I've been building artificial intelligence for over 20 years, and one thing we know is that if we train AI on human data, the AI adopts human biases, and we have not been able to fix that. We've seen those biases start showing up in generative AI, and the gut reaction is always, well, let's just put in some guardrails to stop the AI from doing the biased thing. But one, that never fixes the bias, because the AI finds a way around it. And two, the guardrails themselves can cause problems.
So Google has an AI image generator, and they tried to put guardrails in place to stop the bias in the results. And it turned out that made the results wrong. This is a response to a request for a picture of the signing of the Declaration of Independence. It's a great picture, but it is not factually correct. So in trying to stop the bias, we end up creating more reliability problems. We haven't been able to solve this problem of bias. And if we're thinking about deferring decision making to this technology, replacing human decision makers, and we can't solve this problem, that's a thing we should worry about and demand solutions to, before it's widely adopted and deployed just because it's sexy.
And I think there's one final thing that's missing here, which is that our human intelligence is not defined by our productivity at work. At its core, it's defined by our ability to connect with other people. Our ability to have emotional responses, to take our past and integrate it with new information and creatively come up with new things. That's something artificial intelligence is not now, nor will it ever be, capable of doing. It may be able to imitate it and give us a cheap facsimile of genuine connection and empathy and creativity. But it can't do those things that are core to our humanity. And that's why I'm not really worried about AGI taking over civilization.
But if you come away from this disbelieving everything I have told you, and right now you're worried about humanity being destroyed by AI overlords, the one thing to remember is that, despite what the movies have told you, if it gets really bad, we can always just turn it off.
(Laughter)
Thank you.
(Applause)