Competition. It's a fundamental part of human nature. I was a professional poker player for 10 years, so I've seen all the good, bad and ugly ways it can manifest. When it's done right, it can drive us to incredible feats in sports and innovation, like when companies compete over who can build the safest cars or the most efficient solar panels. Those are all examples of healthy competition, because even though individual companies might come and go, in the long run, the game between them creates win-win outcomes where everyone benefits in the end.
But sometimes competition is not so great and can create lose-lose outcomes where everyone's worse off than before. Take these AI beauty filters, for example. As you can see, they're a very impressive technology. They can salvage almost any picture. They can even make Angelina and Margot more beautiful. So they're very handy, especially for influencers, who can now transform into the most beautiful Hollywood versions of themselves at the click of a button. But handy doesn't always mean healthy. And I've personally noticed how quickly these things can train you to hate your natural face. And there's growing evidence that they're creating issues like body dysmorphia, especially in young people.
Nonetheless, these things are now endemic to social media because the nature of the game demands it. The platforms are incentivized to provide them because hotter pictures mean more hijacked limbic systems, which means more scrolling and thus more ad revenue. And users are incentivized to use them because hotter pictures get you more followers.
But this is a trap, because once you start using these things, it's really hard to go back. Plus, you don't even get a competitive advantage from them anymore because everyone else is already using them too. So influencers are stuck using these things with all the downsides and very little upside. A lose-lose game.
A similar kind of trap is playing out in our news media right now, but with much worse consequences. You'd think that since the internet came along, the increased competition between news outlets would create a sort of positive spiral, like a race to the top of nuanced, impartial, accurate journalism. Instead, we're seeing a race to the bottom of clickbait and polarization, where even respectable papers are increasingly leaning into these kinds of lowbrow partisan tactics. Again, this is due to crappy incentives.
Today, we no longer just read our news. We interact with it by sharing and commenting. And headlines that trigger emotions like fear or anger are far more likely to go viral than neutral or positive ones. So in many ways, news editors are in a similar kind of trap as the influencers, where the more their competitors lean into clickbaity tactics, the more they have to as well. Otherwise, their stories just get lost in the noise. But this is terrible for everybody, because not only does the media earn less trust from the public, it also becomes harder and harder for anyone to discern truth from fiction, which is a really big problem for democracy.
Now, this process of competition gone wrong is actually the driving force behind so many of our biggest issues. Plastic pollution, deforestation, antibiotic overuse in farming, arms races, greenhouse gas emissions. These are all the result of crappy incentives, of poorly designed games that push their players -- be they people, companies or governments -- into adopting strategies and tactics that defer costs and harms to the future. And what's so ridiculous is that most of the time, these guys don't even want to be doing this. You know, it's not like packaging companies want to fill the oceans with plastic or farmers want to worsen antibiotic resistance. But they're all stuck in the same dilemma of: "If I don't use this tactic, I'll get outcompeted by all the others who do. So I have to do it, too."
This is the mechanism we need to fix as a civilization. And I know what you're probably all thinking: "So it's capitalism." No, it's not capitalism -- which, yes, can cause problems, but can also solve them, and has been fantastic overall. It's something much deeper. It's the force of misaligned incentives, of game theory itself.
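The dilemma described above has the structure of a multi-player prisoner's dilemma from game theory. The following is a minimal sketch with made-up payoff numbers (the `payoff` function and its values are illustrative assumptions, not anything from the talk): defecting is individually rational no matter what the others do, yet everyone ends up worse off when everyone defects.

```python
# A minimal sketch of the trap: payoffs are invented, but the structure is a
# multi-player prisoner's dilemma. "restrain" = skip the harmful tactic;
# "defect" = use it (beauty filters, clickbait, racing ahead on AI).

def payoff(my_move: str, others_defecting: int, n_others: int) -> float:
    shared_harm = 2.0 * others_defecting / n_others      # everyone suffers as more players defect
    private_edge = 1.0 if my_move == "defect" else 0.0   # defecting gives a small individual edge, whatever others do
    return 3.0 + private_edge - shared_harm

N_OTHERS = 4
for my_move in ("restrain", "defect"):
    for others in (0, N_OTHERS):
        print(f"{my_move:8s} while {others} others defect -> {payoff(my_move, others, N_OTHERS):.1f}")

# Defecting beats restraining in both cases (4.0 > 3.0 and 2.0 > 1.0), so it is the
# dominant strategy -- yet when everyone defects, each player gets 2.0, worse than the
# 3.0 each would have gotten had everyone restrained. That is the lose-lose trap.
```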
So a few years ago, I retired from poker, in part because I wanted to understand this mechanism better. Because it takes many different forms, and it goes by many different names. These are just some of those names. You can see they're a little bit abstract and clunky, right? They don't exactly roll off the tongue. And given how insidious and connected all of these problems are, it helps to have a more visceral way of recognizing them.
So this is probably the only time you're going to hear about the Bible at this conference. But I want to tell you a quick story from it, because allegedly, back in the Canaanite days, there was a cult who wanted money and power so badly, they were willing to sacrifice their literal children for it. And they did this by burning them alive in an effigy of a god that they believed would then reward them for this ultimate sacrifice. And the name of this god was Moloch. Bit of a bummer, as stories go. But you can see why it's an apt metaphor, because sometimes we get so lost in winning the game right in front of us, we lose sight of the bigger picture and sacrifice too much in our pursuit of victory. So just like these guys were sacrificing their children for power, those influencers are sacrificing their happiness for likes. Those news editors are sacrificing their integrity for clicks, and polluters are sacrificing the biosphere for profit.
In all these examples, the short-term incentives of the games themselves are pushing their players, tempting them, to sacrifice more and more of their future, trapping them in a death spiral where they all lose in the end. That's Moloch's trap. The mechanism of unhealthy competition. And the same is now happening in the AI industry.
We're all aware of the race that's heating up between companies right now over who can score the most compute, who can get the biggest funding round or who can get the top talent. Well, the more companies enter this race, the greater the pressure for everyone to go as fast as possible and sacrifice other important things, like safety testing. This has all the hallmarks of a Moloch trap. Because imagine you're a CEO who, in your heart of hearts, believes that your team is the one best able to safely build extremely powerful AI. Well, if you go too slowly, then you run the risk of other, much less cautious teams getting there first and deploying their systems before you can. So that in turn pushes you to be more reckless yourself. And given how many experts and researchers, both within these companies and completely independent ones, have been warning us about the extreme risks of rushed AI, this approach is absolutely mad. Plus, almost all AI companies are beholden to satisfying their investors, a short-term incentive which, over time, will inevitably start to conflict with any benevolent mission.
And this wouldn't be a big deal if it were really just toasters we were talking about here. But AI, and especially AGI, is set to be a bigger paradigm shift than the agricultural or industrial revolutions. A moment in time so pivotal, it's deserving of reverence and reflection, not something to be reduced to a corporate rat race over who can score the most daily active users. I'm not saying I know what the right trade-off between acceleration and safety is, but I do know that we'll never find out what that trade-off is if we let Moloch dictate it for us.
So what can we do? Well, the good news is we have managed to coordinate to escape some of Moloch's traps before. We managed to save the ozone layer from CFCs with the help of the Montreal Protocol. We managed to reduce the number of nuclear weapons on Earth by 80 percent, with the help of the Strategic Arms Reduction Treaty in 1991. So smart regulation may certainly help with AI too, but ultimately, it's the players within the game who have the most influence on it. So we need AI leaders to show us that they're not only aware of the risks their technologies pose, but also the destructive nature of the incentives that they're currently beholden to. As their technological capabilities reach towards the power of gods, they're going to need the godlike wisdom to know how to wield them.
So it doesn't fill me with encouragement when I see the CEO of a major company saying something like, "I want people to know we made our competitor dance." That is not the type of mindset we need here. We need leaders who are willing to flip Moloch's playbook, who are willing to sacrifice their own individual chance of winning for the good of the whole. Now, fortunately, the three leading labs are showing some signs of doing this. Anthropic recently announced their responsible scaling policy, which pledges to only increase capabilities once certain security criteria have been met. OpenAI have recently pledged to dedicate 20 percent of their compute purely to alignment research. And DeepMind have shown a decade-long focus on science ahead of commerce, like their development of AlphaFold, which they gave away to the scientific community for free. These are all steps in the right direction, but they are still nowhere close to being enough. I mean, most of these are currently just words; they're not even proven actions yet.
So we need a clear way to turn the AI race into a definitive race to the top. Perhaps companies can start competing over who can do best on these kinds of metrics, over who can develop the best security criteria. A race over who can dedicate the most compute to alignment. Now that would truly flip the middle finger to Moloch.
Competition can be an amazing tool, provided we wield it wisely. And we're going to need to do that, because the stakes we are playing for are astronomical. If we get AI, and especially AGI, wrong, it could lead to unimaginable catastrophe. But if we get it right, it could be our path out of many of these Moloch traps that I've mentioned today. And as things get crazier over the coming years, which they probably will, it's going to be more important than ever that we remember who the real enemy is here: Moloch. Not any individual CEO or company, and certainly not one another.
So don't hate the players, change the game.
(Applause)