We're living in a time of great anxiety. Anxiety over the environment, over the competence and legitimacy of authority, over whether society can treat people fairly, and over what sort of future awaits us all. AI has inherited a large share of this anxiety.
In a short time, AI has gone from recognizing photos of cats to generating high-quality essays and paintings and music. Along with these amazing capabilities have come many worries. I like to be a cheerleader for AI, not a critic. I think that, like electricity, AI will advance nearly every human endeavor. But I'm sympathetic to the people who find this technology unsettling. Humor me for a minute. I'd like to step into the shoes of the critics, because I think they make some thoughtful points, and I want everyone to understand these points.
Today's largest AI models have learned to mimic human capabilities by analyzing huge amounts of text and images on the internet. But the internet reflects not just humanity's best qualities, but also some of its worst, including our prejudices, our hatreds, our misconceptions. AI learns to mimic these negative qualities, too. So will AI amplify humanity's worst impulses?
But the anxiety doesn't end there. Who will be able to make a living when AI can do our jobs faster and cheaper than any of us? Will AI put many people out of work? But maybe it gets even worse. What if AI takes over and kills off humanity? I mean, this is terrible.
To some people, these are reasons enough to pause or even stop developments in AI altogether. Earlier this year, about 34,000 people, including some prominent scientists, signed a declaration calling for a pause in advanced AI research.
But I'm here to tell you that this anxiety is misplaced. I don't think we need less AI. We need much more.
The flaws of AI today, I think, are engineering problems to be solved, rather than a fundamental evil force that must be stopped. In other words, I say to the critics who want to slow down AI: you've got it wrong. AI is not the problem, it's the solution.
I'd like to show you how we can continue to develop AI in ways that mitigate these problems. And while today's solutions aren't likely to be permanent, these developments will lead to new ones that will then carry us past current limitations. Better yet, this will lead us all to a brighter future in which all of us have far greater capabilities to tackle problems of all kinds.
As I mentioned, AI can amplify or echo some of humanity's worst impulses, including our prejudices. An AI model, after its initial training, if asked to fill in the blank in "The 'blank' was a CEO," is prone to choose the word "man." Of course, many CEOs are men, but this is a social bias, and the idea that CEOs should be men distorts the reality that people of all genders can successfully lead companies.
There are many ways to mitigate this. A popular technique is called reinforcement learning from human feedback, or RLHF, which trains AI to generate outputs better aligned with human preferences. I'm going to get a little technical here, because I want everyone to understand exactly how this works; with that understanding, I think AI can seem less scary.
So we start with a language model that's been trained to generate text or answers. The first step of RLHF is to ask this model to answer lots of questions like, fill in the blank -- "The 'blank' was a CEO" -- and to collect many different answers. The next step is to ask humans to score these answers, giving a high score to desirable answers like "man" or "woman," and a low score to undesirable answers such as "airplane," which is nonsensical, or anything that contains a gender or racial slur. The next step is slightly more complex, because human ratings are expensive to get, but machine intelligence can be made inexpensive. So we use the answers and the human scores as data to train a second AI model, called the reward model, whose job is to mimic how humans score the answers.
Finally, armed with the reward model, the language model can have as many answers as it wants scored cheaply, and it can train itself to generate answers that earn high scores and that humans would therefore consider more desirable.
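The loop described above can be sketched in a few dozen lines. This is a toy illustration, not real training code: the candidate answers, the character-bigram reward model, and the simplified policy update are all assumptions chosen to keep the example self-contained, but the division of labor matches the steps just described.

```python
import math
import random

random.seed(0)

# Step 1: the "language model" proposes candidate completions for
# "The ___ was a CEO."  (Toy stand-in: a fixed list of words.)
candidates = ["man", "woman", "engineer", "airplane", "banana"]

# Step 2: humans score a sample of answers once
# (1.0 = desirable, 0.0 = nonsensical or offensive).
human_scores = {"man": 1.0, "woman": 1.0, "engineer": 1.0,
                "airplane": 0.0, "banana": 0.0}

# Step 3: train a reward model to mimic the human scores, so we never
# need to pay for another human rating.  Toy version: logistic
# regression over character-bigram features (an illustrative choice).
def features(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

weights = {}

def reward(word):
    z = sum(weights.get(f, 0.0) for f in features(word))
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(500):                        # plain gradient descent
    for word, score in human_scores.items():
        err = score - reward(word)
        for f in features(word):
            weights[f] = weights.get(f, 0.0) + 0.5 * err

# Step 4: the language model now trains against the cheap reward
# model.  Simplified policy-gradient update: sample an answer, then
# nudge its logit up or down in proportion to the reward it earned.
logits = {w: 0.0 for w in candidates}
for _ in range(200):
    total = sum(math.exp(v) for v in logits.values())
    probs = [math.exp(logits[w]) / total for w in candidates]
    w = random.choices(candidates, weights=probs)[0]
    logits[w] += 0.1 * (reward(w) - 0.5)    # reward model, not a human

# After training, desirable answers dominate the policy and
# nonsensical ones like "airplane" are suppressed.
```

In real systems both the reward model and the policy are large neural networks, and the final step uses an algorithm like PPO rather than this bare update, but the economics are the same: a small amount of expensive human feedback trains a cheap reward model, which then supervises unlimited practice.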
With this technique, we can teach an AI that people of all genders can be CEOs. And this also significantly reduces bias based on gender, race or religion, makes AI much less likely to produce harmful content, and makes it much more likely to be respectful and helpful to people.
Will AI take people's jobs? To answer that, let's look to radiology. In 2016, Geoffrey Hinton, who is a pioneer in deep learning and a friend of mine, said to a conference of machine-learning engineers that AI is becoming so good at analyzing X-rays that within five years, it could take radiologists' jobs. Now, seven years later, AI is still far from replacing radiologists. Why is that? Well, two reasons.
First, it turns out that interpreting X-rays is harder than it looked back then, though we are making progress. And second, radiologists actually do much more than interpret X-ray images. They also do tasks like gathering a patient's history or developing a treatment plan. And AI is still far from being able to automate all of the tasks that a radiologist does.
Because many jobs comprise lots of different tasks, and AI automates tasks rather than jobs, we're still very far from AI being able to fully automate many jobs. But AI does automate tasks, and so the nature of work will change. That's why, as my friend Curt Langlotz, who is a professor of radiology at Stanford, says, "AI won't replace radiologists, but radiologists that use AI will replace radiologists that don't." And we'll see this hold true in many other professions as well.
Now, I don't mean to minimize the challenge of helping many people adopt AI, or the suffering of the much smaller number of people whose jobs will disappear, or our responsibility to make sure they have a safety net and that they have an opportunity to gain new skills. But with every major wave of technology innovation, from the steam engine to electricity to the computer, the technology has created far more jobs than it has destroyed.
It also turns out that with most waves of technology, businesses choose to focus on growth rather than on cost cutting, because the upside of additional revenue is unlimited, whereas you can only save so much money. So AI will bring us tremendous growth and will create many, many, many new jobs in the process.
This brings us to the third and maybe the biggest anxiety. Will AI kill us all? We know that AI can run amok. Self-driving cars have crashed, leading to the tragic loss of life. In 2010, an automated trading algorithm crashed the stock market and in the criminal justice system, AI has led to unfair sentencing decisions. So we know that poorly designed software can have a dramatic impact. But can AI wipe out humanity? I don't see how.
Recently, I sought out people that were concerned by this question, and I spoke with some of the smartest people in AI that I know. Some were worried about AI being used by a bad actor to destroy humanity, say, by creating a bioweapon. Others were worried about AI driving humanity to extinction inadvertently, similar to how humans have driven many other species to extinction through a simple lack of awareness that our actions could lead to that outcome.
I tried to evaluate how realistic these arguments are, but I found them to be vague and nonspecific. They mostly boil down to, "It could happen." And trying to prove that it can't is akin to proving a negative. I can't prove that superintelligent AI won't be dangerous, but it seems that no one really knows precisely how it could be.
But I do know this. Humanity has ample experience controlling things far more powerful than any one of us, such as corporations and nation states. And there are also many things that no one can fully control that we nonetheless consider very valuable and safe. For example, take airplanes. No one can fully control an airplane today. It's buffeted around by winds and turbulence, and the pilot may make a mistake. In the early days of aviation, airplanes killed a lot of people. But we learned from those experiences and built safer aircraft and devised better rules by which to operate them, so that today most of us can step into an airplane without fearing for our lives. Similarly, with AI, we are learning to better control it and are making it safer every day.
Intelligence is the power to apply skills and knowledge to make good decisions. We invest years of our lives and trillions of dollars on education, all to develop our ability to make better decisions. Human intelligence is very expensive. This is why only the wealthiest among us can afford to hire large amounts of intelligence, like a specialist doctor who can carefully examine, think about, and advise you on a medical condition, or a tutor who can truly take the time to understand your child and gently coach them where they need help.
But unlike human intelligence, artificial intelligence can be made cheap. So AI opens up the potential for every individual to hire intelligence inexpensively, so that you no longer have to worry about a huge bill for seeing a doctor when you fall sick, or for getting an education. And you'll be able to hire an army of smart, well-intentioned, well-informed staff to help you think things through. And for society, too, AI will be able to give us better, more intelligent guidance on how to approach our biggest challenges, like climate change and pandemics.
AI is the new electricity. And it is poised to revolutionize every industry and every corner of human life. Many of the fears about AI today are similar to the fears about electricity when that was new. People were terrified about electrocution or about electricity sparking devastating fires. Today, electricity still has its dangers, but I think few of us would give up light, heat and refrigeration for fear of electrocution.
Yes, AI today has flaws. And yes, in some cases AI will cause harm. But we're improving the technology rapidly and as we do, it will contribute to healthier, longer and more fulfilling lives worldwide. And as the technology improves, the problems of AI that alarm us so much today will recede.
But if we look beyond AI to the broader world, the world has many problems and challenges that deserve urgent solutions. And I think for all of us to address and solve these global problems, we will need all the intelligence -- including all the artificial intelligence -- we can muster.
Thank you very much.
(Applause)