No matter who you are or where you live, I'm guessing that you have at least one relative that likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos. And you've probably already muted them on Facebook for sharing social posts like this one.
It's an image of a banana with a strange red cross running through the center. And the text around it is warning people not to eat fruits that look like this, suggesting they've been injected with blood contaminated with the HIV virus. And the social share message above it simply says, "Please forward to save lives." Now, fact-checkers have been debunking this one for years, but it's one of those rumors that just won't die. A zombie rumor. And, of course, it's entirely false.
It might be tempting to laugh at an example like this, to say, "Well, who would believe this, anyway?" But the reason it's a zombie rumor is because it taps into people's deepest fears about their own safety and that of the people they love. And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.
Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.
Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous. This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through, where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.
And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.
(Applause)
Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.
Let's take this example from London, from March 2017, a tweet that circulated widely in the aftermath of a terrorist incident on Westminster Bridge. This is a genuine image, not fake. The woman who appears in the photograph was interviewed afterwards, and she explained that she was utterly traumatized. She was on the phone to a loved one, and she wasn't looking at the victim out of respect. But it still was circulated widely with this Islamophobic framing, with multiple hashtags, including: #BanIslam. Now, if you worked at Twitter, what would you do? Would you take that down, or would you leave it up? My gut reaction, my emotional reaction, is to take this down. I hate the framing of this image. But freedom of expression is a human right, and if we start taking down speech that makes us feel uncomfortable, we're in trouble.
And this might look like a clear-cut case, but, actually, most speech isn't. These lines are incredibly difficult to draw. What's a well-meaning decision by one person is outright censorship to the next. What we now know is that this account, Texas Lone Star, was part of a wider Russian disinformation campaign, one that has since been taken down. Would that change your view? It would mine, because now it's a case of a coordinated campaign to sow discord. And for those of you who'd like to think that artificial intelligence will solve all of our problems, I think we can agree that we're a long way away from AI that's able to make sense of posts like this.
So I'd like to explain three interlocking issues that make this so complex and then think about some ways we can consider these challenges. First, we just don't have a rational relationship to information, we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others is far more effective. And besides, many of these companies, their business model is attached to attention, which means these algorithms will always be skewed towards emotion.
Second, most of the speech I'm talking about here is legal. It would be a different matter if I was talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. Yet people keep talking about taking down "problematic" or "harmful" content, with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech. And even if we could decide which speech to take down or leave up, we've never had so much speech. Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts. We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.
And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without thinking. We're adding to the pollution.
I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that. And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.
(Applause)
And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.
So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology. And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.
So the missing link is us. It's those people who use these technologies every day. Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do. So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.
Second, people's experiences with information are personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science. Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.
And third, can we find a way to connect the dots? No one sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms.
A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible. And educators around the world are testing different techniques for finding ways to make people critical of the content they consume. All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded.
There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems.
How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives? So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.
Thank you.
(Applause)