I was 17 years old in November 1989 when the Berlin Wall fell. Growing up, I always had the feeling that the Soviet Union would be the eternal enemy of the United States and the Enlightenment ideals upon which it was founded. So it was astounding, beginning in 1990, to watch the Soviet Union unravel. And naively, I had this feeling that somehow or another, democracy and freedom had magically prevailed over darker forces. Some observers, most famously Francis Fukuyama, declared in 1992 that we were witnessing "the end of history." What a grand thought. But in fact, nothing could be further from the truth.

Russia faced a huge problem around 1992: how to quickly privatize all of its 225,000 state-owned enterprises. And for help, it turned to Harvard University and an elite group of economists there led by a man named Andrei Shleifer, on the left, and overseen by Lawrence Summers. Summers went on to become US Treasury Secretary and, later, the president of Harvard University.
And this group of economists started to privatize, very quickly, all of the businesses in the Soviet Union. But there was a problem: they had intimate knowledge of what was going to be privatized, by whom, when, and on what terms. And they started to figure out that they were in a unique position to profit from this knowledge. So Nancy Zimmerman, who turned out to be Andrei Shleifer's wife, started a hedge fund and made massive profits based on knowledge of what was happening in Russia at the time. And this continued with other members of this Harvard Institute for International Development, which developed a real corruption problem because so many of the people involved were profiting. So in effect, while Russia invited Harvard and the United States to teach them how to set up good governance and the rule of law in a capitalist economy, what we taught them was how to build a kleptocracy. (Chuckles)

As this situation started to collapse in 1996, some apologies were issued - Larry Summers said he was sorry that this happened. There were some small fines. But there was no real consequence for anyone; people got a slap on the wrist. In fact, Shleifer is still teaching at Harvard. So not only did we fail to demonstrate good governance and the rule of law, we demonstrated how to avoid accountability when you get caught stealing. And Harvard undergraduates got the message that if you were elite enough, the rules simply didn't apply to you.

So let's fast-forward to 2014. This is a map I made of my hometown, Baltimore, Maryland, in the United States. It's a map of all the Twitter users in the city - at least all the ones I could get. Each dot represents a user, and each line represents a friend connection between them. What you can also see are communities of interest starting to emerge - each color indicates one to some extent - and you can see that people are very clearly grouped. In a moment I'll sketch how a map like this can be built. And when I saw this, I felt, gee, this is a terrific tool for understanding how society works and helping to improve it - urban planners could use it to increase their knowledge of how things function.

But a different cast of characters saw the same kind of data and had different thoughts. You've heard about Cambridge Analytica and the efforts in the United States - and also here in Mexico and throughout Europe - to use social media data to try to influence how politics and society work. It's really the opposite of the vision I had. Russia also saw this and married the idea to its historic program of "active measures," which is its way of creating conflict in a state it wants to take over or otherwise control, by injecting disinformation and exploiting the divisions that already exist in society. The Cambridge Analytica effort, coupled with this effort from Russia, pulled things in that direction and started to create serious problems throughout the West. And in fact, the goal of active measures is to capture a country's economy, foreign policy, and defense. Various parties throughout Europe, on both the left and the right, were set up to promote a kind of ethno-fascism that suited Russia's interests and was also compatible with the interests of the people funding Cambridge Analytica.
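As promised, here is a minimal sketch of how a friend-graph map like the Baltimore one can be built. This is not my original pipeline - it assumes you already have a file of friend pairs (the "edges.csv" filename is hypothetical) and leans on off-the-shelf community detection from networkx:

```python
# Minimal sketch of building a community map like the Baltimore one.
# Not the original pipeline: assumes a pre-collected list of friend
# pairs in "edges.csv" (hypothetical filename), one pair per line.
import csv

import matplotlib.pyplot as plt
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
with open("edges.csv") as f:
    for user_a, user_b in csv.reader(f):
        G.add_edge(user_a, user_b)  # one dot per user, one line per friendship

# Group users into "communities of interest" by modularity.
communities = greedy_modularity_communities(G)
color = {user: i for i, members in enumerate(communities) for user in members}

# A force-directed layout pulls tightly knit groups together,
# so the communities show up as visible clusters.
pos = nx.spring_layout(G)
nx.draw(G, pos, node_size=10, width=0.1,
        node_color=[color[n] for n in G.nodes()])
plt.show()
```

Real pipelines differ mainly in scale and in how the friend pairs are collected, but the shape of the result is the same: dots, lines, and clearly grouped communities.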
So around the beginning of 2017, some of us who were looking at this from a data perspective started to figure out what was going on. This is a very complicated data visualization - it's not important that you understand it fully right now. But what we were able to do here was figure out who was connected to Cambridge Analytica and which players were funding things - and that's quite difficult once you start considering shell corporations and the like. So this is a tool we used to try to understand what was happening with Cambridge Analytica.

You've probably heard that in the United States, about 87 million - or maybe even hundreds of millions of - Facebook users' profiles were captured to extract personality data and friend networks in order to direct advertising at those people. And the same thing was done here in Mexico. Cambridge Analytica partnered with a firm called Pig.gi - "pig dot G I" - to capture this data here in Mexico by offering free Wi-Fi access in exchange for personal data and the ability to direct ads at people. And they got caught in the act earlier this year, in 2018. The founder of Pig.gi, Joel Phillips, said about Cambridge Analytica, "Maybe they weren't the best people to take money from." So it's an issue here as well.

Now, the question a lot of people have is, "How did this actually work?" This is a slide taken from Cambridge Analytica's website. The way they attempted to influence society was by assigning each person five scores - this is called the "Big Five" personality profile. So they assigned a score for openness, conscientiousness, extroversion, agreeableness, and neuroticism. And based on those scores, their estimation was that they could direct the content that would resonate most fully with each individual.

Cambridge Analytica got this technique from their parent company, SCL - Strategic Communications Laboratories - which got its start in wartime psychological operations. This is a slide from some of SCL's materials from Afghanistan, about how to target young unmarried males. You can see a whole set of parameters they're using to narrow down exactly who they want to reach. The idea is to change people's behavior, using techniques like this: suppose you owned a piece of beachfront property and wanted to keep people from swimming there. You could put up a sign that says, "No Swimming - Private Property," or you could put up a sign that says, "No Swimming - Shark Sighted." Which one do you think would work better? That was the kind of work they were doing in Afghanistan, which then fed into their work at Cambridge Analytica.

And so they used a technique called "dark posts," which, at the time, allowed advertisers to direct specific messages to specific personality types. This is a sample of three different images used with videos created by the John Bolton Super PAC. John Bolton - you may recognize the name - is now the national security advisor of the United States, which probably should cause you some concern. But anyway, they would direct a specific message to each personality type. One of these might incite fear of Islam and of immigrants. Another might talk about military tradition. Another might be more about science and rationality. So they figured they could resonate further by targeting these ads very specifically.
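In code, the core of that targeting logic is simple. Here is a toy sketch - the variant names, trait weights, and matching rule are my own invented illustration, not Cambridge Analytica's actual model: score each person on the five traits, score each ad variant by which traits it plays to, and serve the best match.

```python
# Toy sketch of personality-targeted ad selection. The variants and
# trait weights are invented for illustration, not Cambridge
# Analytica's actual model.
from dataclasses import dataclass

TRAITS = ["openness", "conscientiousness", "extroversion",
          "agreeableness", "neuroticism"]

@dataclass
class AdVariant:
    name: str
    appeal: dict  # trait -> how strongly this ad plays to that trait

VARIANTS = [
    AdVariant("fear_of_outsiders", {"neuroticism": 0.9, "agreeableness": -0.3}),
    AdVariant("military_tradition", {"conscientiousness": 0.8, "openness": -0.2}),
    AdVariant("science_and_reason", {"openness": 0.9}),
]

def pick_variant(profile):
    """Pick the variant whose appeal best matches a Big Five profile.

    profile maps each trait to a score in [0, 1], e.g. estimated from
    quiz answers or observed likes; 0.5 is assumed if unknown.
    """
    def resonance(ad):
        return sum(ad.appeal.get(t, 0.0) * profile.get(t, 0.5) for t in TRAITS)
    return max(VARIANTS, key=resonance)

# A highly neurotic, low-openness user gets the fear-based message:
print(pick_variant({"neuroticism": 0.9, "openness": 0.2}).name)
```

The real systems learn those weights from data rather than hand-coding them, but the selection step - match message to profile, maximize resonance - is the same idea.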
And so the question is, "Did any of this actually work?" I would say that, in general, the jury is still a little bit out. But if you ask Arron Banks, who was one of the leading funders of Brexit and of a lot of Cambridge Analytica's work there, he says it did. He said they created one million online followers that reached upwards of 25 million people most weeks. The engagement was huge, and so on. And if you look at the statistics, it would have taken only about 650,000 voters changing their minds to flip the Brexit result. So it didn't take changing very many people's behavior to make a difference in an election like Brexit.

Now, a lot of us think, hmm, does this really add up? I'm a rational person. I make decisions based on the information I receive. I think clearly. How can people whispering into my ear on the internet change my mind? And I feel you - that's a perfectly valid point of view. But at the end of the day, when people start to operate as a collective organism, they don't really act like rational individuals. They act a little more like ants. And that's no disrespect to ants - we love ants. But if you really want to manipulate people, you need to think of them as a collective, and you need to think about how you're going to corral them into doing what you want them to do.

So naturally, I did what anybody would do, which is to create a mathematical model that simulates how this might work. What we have here are two distinct social networks that we'll call "normal." And you can see that as we inject ordinary news stories into those networks, they travel around and even move back and forth between the networks without much trouble. Everything goes as you'd expect. This is roughly how social media worked before these disinformation efforts. You can also see that the network itself doesn't change much - maybe somebody adds a new friend connection here or there - but mostly it stays the same. And this can continue indefinitely as long as we keep injecting ordinary news stories.

But when we start to inject divisive stories - what some people call "fake news" - changes start to appear in the network. For one thing, some voices from the edge move toward the center and get more attention, because they're saying things that resonate with people in a certain way. That can continue for some time - the network doesn't get too badly affected, but it does reveal some changes as those stories move through it.

Here's where it gets really interesting. When we start to inject stories targeted at the people we know they will resonate with, so that they get maximum reach, we can affect the composition of the network very, very quickly. As you can see, as we keep injecting these targeted stories, the network keeps falling apart, and it becomes harder for other kinds of information to move freely. And that's really the end result of this kind of activity: real, normal, good news stories have a harder time moving around. But also, crucially, we've retrained the algorithms that choose what everyone sees to think that people no longer want to see what they wanted to see before. It's a big change. Normal stories have a harder time getting around, and you're left with a hobbled, broken network.
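The actual model has more moving parts than I can show here, but a stripped-down sketch captures the mechanism. All of the specifics below - the network sizes, the resonance rule, the rewiring probability - are invented for illustration: two communities, stories that spread to the neighbors they resonate with, and targeted divisive stories that cause people to drop ties to those who disagree.

```python
# Stripped-down sketch of the simulation described above. These are
# my own toy assumptions, not the exact model from the talk: two
# communities, each user with an "alignment" in [-1, 1]; a story
# spreads to neighbors it resonates with; targeted divisive stories
# also make people drop ties to neighbors who disagree.
import random

import networkx as nx

random.seed(1)

# Two dense communities with sparse connections between them.
G = nx.planted_partition_graph(2, 50, p_in=0.2, p_out=0.02, seed=1)
alignment = {n: -0.5 if n < 50 else 0.5 for n in G}

def inject(bias, targeted=False, rewire_prob=0.3):
    """Spread one story with the given bias; return how many users it reached."""
    if targeted:
        # Seed the story with the users it resonates with most strongly.
        seeds = sorted(G, key=lambda n: -bias * alignment[n])[:5]
    else:
        seeds = random.sample(list(G), 5)
    reached, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in list(G.neighbors(node)):
            resonates = bias * alignment[nbr] > -0.1
            if resonates and nbr not in reached:
                reached.add(nbr)
                frontier.append(nbr)
            elif targeted and not resonates and random.random() < rewire_prob:
                G.remove_edge(node, nbr)  # division: the tie gets dropped
    for n in reached:
        # Exposure nudges each reader's alignment toward the story.
        alignment[n] += 0.1 * (bias - alignment[n])
    return len(reached)

def cross_ties():
    """Count the edges that bridge the two communities."""
    return sum(1 for u, v in G.edges() if (u < 50) != (v < 50))

print("cross-community ties before:", cross_ties())
print("normal story reach:", inject(bias=0.0))
for _ in range(20):  # a sustained wave of targeted, divisive stories
    inject(bias=random.choice([-1.0, 1.0]), targeted=True)
print("cross-community ties after:", cross_ties())
print("normal story reach after:", inject(bias=0.0))
```

Run it and you see the pattern from the animation: neutral stories initially reach almost everyone, the wave of targeted stories cuts the bridges between the two communities and polarizes alignments, and afterward a neutral story has a much harder time getting around.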
So you might be skeptical and say, "How does this model that this guy came up with reflect what's actually happening in social media? It's just a black box, right?" Well, sure, it's just a black box. But so is Facebook. You don't have any idea what their algorithm is doing; we don't have any ability to understand it. All we know is that we're running the experiment anyway, and that social media works in something like this way. So in effect, we are operating the largest sociological experiment ever done on live subjects, and as damage occurs, we mitigate it after the fact as best we can, without full knowledge of what was even done in the first place. It's crazy.

You can also ask of that model, "What actually happened? Were people's minds actually changed?" Certainly behavior changed: people found out about different things and interacted with each other differently. Maybe some people became more engaged, and others less engaged. But in reality, probably not many people's minds were actually changed. The key thing, again, is that the algorithms that decide what everybody sees got fed bad information because of this altered behavior.
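Here is a toy illustration of that feedback loop - an invented stand-in for a real feed-ranking system, not Facebook's actual algorithm. The ranker just learns a click-through rate per topic; one burst of manipulated engagement is enough to retrain it, even though nobody's underlying preferences changed.

```python
# Toy feedback loop: an invented stand-in for a feed-ranking
# algorithm, not Facebook's actual system. The ranker learns a
# click-through rate per topic and always shows the top scorer.
import random
from collections import defaultdict

random.seed(0)
clicks = defaultdict(int)
views = defaultdict(int)

def score(topic):
    # Observed click-through rate, with a weak prior toward 0.5.
    return (clicks[topic] + 1) / (views[topic] + 2)

def show_feed(topics, user_clicks):
    """Show the top-ranked topic and record the engagement it earns."""
    top = max(topics, key=score)
    views[top] += 1
    clicks[top] += user_clicks(top)
    return top

topics = ["local_news", "divisive_story"]

# Organic behavior: people click local news about 60% of the time
# and essentially never click the divisive story.
for _ in range(100):
    show_feed(topics, lambda t: int(t == "local_news" and random.random() < 0.6))

# A coordinated burst of manipulated engagement on the divisive story...
views["divisive_story"] += 50
clicks["divisive_story"] += 45

# ...is enough to retrain the ranker: the divisive story now outranks
# local news for everyone, though organic preferences never changed.
print(show_feed(topics, lambda t: 0))
```

The point of the toy is that nothing about the audience had to change - only the signals the algorithm was fed.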
Now, if you think about Facebook: in July 2018, it lost about $120 billion of market capitalization in a single day, and that was largely fallout from the Cambridge Analytica scandal. And you hear from both Facebook and Twitter - Jack Dorsey runs Twitter - that they don't need oversight, that they can figure this out on their own. They don't need to be regulated; everything's going to be OK. Just give us maybe a slap on the wrist, and we'll move on. And indeed, Sheryl Sandberg, the COO of Facebook - guess who her mentor was? Larry Summers at Harvard. So you get the feeling there's this sense that if you're elite enough, the rules don't apply to you - that Harvard and society hold Zuckerberg the executive to a different standard than they wanted to hold Zuckerberg the undergraduate to when he went to Harvard.

And these people - these are other executives in Silicon Valley - we have a moral leadership problem in Silicon Valley. These are not necessarily bad people - maybe Elizabeth Holmes is. (Laughs) The rest of them, some of them are my friends - I'm friends with Jack Dorsey. But we aren't putting people over profits. There's this huge desire to make money, not enough attention to ethics and to people, and not enough accountability.

So whenever anybody tells you that it's the end of history, that things will automatically take care of themselves, that we don't need regulation - "please leave us alone; we're elite; the rules don't apply to us" - it's not true. There's always work we have to do: keeping a moral compass, pointing our society in the right direction, putting people ahead of profits. It's a tremendous responsibility that we have.

So all of us here, as builders, as creators of our companies and institutions - I would ask you to ask yourselves: "Are we doing what's right? Are we doing what's ethical? Are we being accountable to the people we serve? Are we being naive?" It really boils down to the question, "What kind of world do we want to live in?" Do we want to live in a world where the elite don't have to obey the rules? Or do we want to live in a world where there's equality and justice for everyone? I know what kind of world I want to live in. Do you?

Thanks.

(Applause)