Recently I told someone my work is on democracy and technology. He turned to me and said, “Wow... I’m sorry.”
(Laughter)
But I love my work. I know that we can build a world where technological marvels are directed towards people's benefit, using their input. We have gotten so used to seeing democracy as a problem to be solved. But I see democracy as a solution, not as a problem.
Democracy was once a radical political project, itself a cutting-edge social technology, a new way to answer the very question we are faced with now, in the era of artificial intelligence: how do we use our capabilities to live well together?
We are told that transformative technologies like AI are too complicated, or too risky or too important to be governed democratically. But this is precisely why they must be. If existing democracy is unequal to the task, our job is not to give up on it. Our job is to evolve it. And to use technology as an asset to help us do so.
Still, I understand his doubts. I never meant to build my life around new forms of democracy. I started out just really believing in the power of science. I was modifying DNA in my kitchen at 12, and when I got to Stanford as a computational biology major, I was converted to a new belief -- technology. I truly believed in the power of tech to change the world. Maybe, like many of you.
But I saw that the technologies that really made a difference were the ones that were built with and for the collective. Not the billions of dollars pumped into the 19th addiction-fueling social app. But the projects that combine creating something truly new with building in ways for people to access, benefit from and direct it. Instead of social media, think of the internet. Built with public resources on open standards.
This is what brought me to democracy. Technology expands what we are capable of. Democracy is how we decide what to do with that capability.
Since then, I've worked on using democracy as a solution in India, the US, the UK, Taiwan. I've worked alongside incredible collaborators to use democracy to help solve COVID, to help solve data rights. And as I'll tell you today, to help solve AI governance with policymakers around the world and cutting-edge technology companies like OpenAI and Anthropic.
How? By recognizing that democracy is still in its infancy. It is an early form of collective intelligence, a way to put together decentralized input from diverse sources and produce decisions that are better than the sum of their parts.
That’s why, when my fantastic cofounder Saffron Huang and I left our jobs at Google DeepMind and Microsoft to build new democratic governance models for transformative tech, I named our nonprofit the Collective Intelligence Project, as a nod to the ever-evolving project of building collective intelligence for collective flourishing. Since then we've done just that, building new collective intelligence models to direct artificial intelligence, to run democratic processes. And we've incorporated the voices of thousands of people into AI governance.
Here are a few of the things we've learned. First, people are willing and able to have difficult, complex conversations on nuanced topics. When we asked people about the risks of AI they were most concerned about, they didn't reach for easy answers. Out of more than 100 risks put forward, the top-cited one: overreliance on systems we don't understand. We talked to people across the country, from a veteran in the Midwest to a young teacher in the South. People were excited about the possibilities of this technology, but there were specific things they wanted to understand about what models were capable of before seeing them deployed in the world. A lot more reasonable than many of the policy conversations that we're in.
And importantly, we saw very little of the polarization we're always hearing about. On average, just a few divisive statements for hundreds of consensus statements. Even on the contentious issues of the day, like free speech or race and gender, we saw far more agreement than disagreement. Almost three quarters of people agree that AI should protect free speech. Ninety percent agree that AI should not be racist or sexist. Only around 50 percent think that AI should be funny, though, so there are still contentious issues out there.
These last statistics are from our collective constitution project with Anthropic, where we retrained one of the world's most powerful language models on principles written by 1,000 representative Americans. Not AI developers or regulators or researchers at elite universities. We built on a way of training AI that relies on a written set of principles, or a constitution. We asked ordinary people to cowrite this constitution, and we compared the result to a model that researchers had come up with.
When we started this project, I wasn't sure what to expect. Maybe the naysayers were right. AI is complicated. Maybe people wouldn't understand what we were asking them. Maybe we'd end up with something awful. But the people's model, trained on the cowritten constitution, was just as capable and more fair than the model the researchers had come up with. People with little to no experience in AI did better at building a fairer chatbot than researchers who work on this full-time. Maybe I shouldn't have been surprised. As one of our participants from another process said, "They may be experts in AI, but I have eight grandchildren. I know how to pick good values."
If technology expands what we are capable of and democracy is how we decide what to do with that capability, here is early evidence that democracy can do a good job deciding. Of course, these processes aren't enough. Collective intelligence requires a broader reimagining of technology and democracy. That’s why we’re also working on co-ownership models for the data that AI is built on -- which, after all, belongs to all of us -- and using AI itself to create new and better decision-making processes. Taking advantage of the things that language models can do that humans can’t, like processing huge amounts of text input.
Our work in Taiwan has been an incredible test bed for all of this. Along with Minister Audrey Tang and the Ministry of Digital Affairs, we are working on processes to ask Taiwan's millions of citizens what they actually want to see as a future with AI. And using that input not just to legislate, but to build. Because one thing that has already come out of these processes is that people are truly excited about a public option for AI, one that is built on shared public data that is reliably safe, that allows communities to access, benefit from and adjust it to their needs. This is what the world of technology could look like. Steered by the many for the many.
I often find that we accept unnecessary trade-offs when it comes to transformative tech. We are told that we might need to sacrifice democracy for the sake of technological progress. That we have no choice but to concentrate power to keep ourselves safe from possible risks. This is wrong.
It is impossible to have any one of these things -- progress, safety or democratic participation -- without the others. If we resign ourselves to only two of the three, we will end up with either centralized control or chaos. Either a few people get to decide or no one does. These are both terrible outcomes, and our work shows that there is another way. Each of our projects advanced progress, safety and democratic participation by building cutting-edge democratic AI models, by using public expertise as a way to understand diffuse risks and by imagining co-ownership models for the digital commons.
We are so far from the best collective intelligence systems we could have. If we started over on building a decision-making process for the world, what would we choose? Maybe we'd be better at separating financial power from political power. Maybe we'd create thousands of new models of corporations or bureaucracies. Maybe we'd build in the voices of natural elements or future generations.
Here's a secret. In some ways, we are always starting from scratch. New technologies usher in new paradigms that can come with new collective intelligence systems. We can create new ways of living well together if we use these brief openings for change. The story of technology and democracy is far from over. It doesn't have to be this way. Things could be unimaginably better.
As the Indian author Arundhati Roy once said, "Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing." I can hear our new world breathing. One in which we shift the systems we have towards using the solution of democracy to build the worlds we want to see. The future is up to us. We have a world to win.
Thank you.
(Applause)