No matter who you are or where you live, I'm guessing that you have at least one relative that likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos. And you've probably already muted them on Facebook for sharing social posts like this one.
It's an image of a banana with a strange red cross running through the center. And the text around it is warning people not to eat fruits that look like this, suggesting they've been injected with blood contaminated with the HIV virus. And the social share message above it simply says, "Please forward to save lives." Now, fact-checkers have been debunking this one for years, but it's one of those rumors that just won't die. A zombie rumor. And, of course, it's entirely false.
It might be tempting to laugh at an example like this, to say, "Well, who would believe this, anyway?" But the reason it's a zombie rumor is because it taps into people's deepest fears about their own safety and that of the people they love. And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.
Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.
Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous. This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through feeds where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.
And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.
(Applause)
Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.
Let's take this example from London, from March 2017, a tweet that circulated widely in the aftermath of a terrorist incident on Westminster Bridge. This is a genuine image, not fake. The woman who appears in the photograph was interviewed afterwards, and she explained that she was utterly traumatized. She was on the phone to a loved one, and she wasn't looking at the victim out of respect. But it still was circulated widely with this Islamophobic framing, with multiple hashtags, including: #BanIslam. Now, if you worked at Twitter, what would you do? Would you take that down, or would you leave it up? My gut reaction, my emotional reaction, is to take this down. I hate the framing of this image. But freedom of expression is a human right, and if we start taking down speech that makes us feel uncomfortable, we're in trouble.
And this might look like a clear-cut case, but, actually, most speech isn't. These lines are incredibly difficult to draw. What's a well-meaning decision by one person is outright censorship to the next. What we now know is that this account, Texas Lone Star, was part of a wider Russian disinformation campaign, one that has since been taken down. Would that change your view? It would mine, because now it's a case of a coordinated campaign to sow discord. And for those of you who'd like to think that artificial intelligence will solve all of our problems, I think we can agree that we're a long way away from AI that's able to make sense of posts like this.
So I'd like to explain three interlocking issues that make this so complex and then think about some ways we can consider these challenges. First, we just don't have a rational relationship to information, we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others are far more effective. And besides, many of these companies have a business model attached to attention, which means these algorithms will always be skewed towards emotion.
Second, most of the speech I'm talking about here is legal. It would be a different matter if I was talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. Yet people keep talking about taking down "problematic" or "harmful" content, with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech. And even if we could decide which speech to take up or take down, we've never had so much speech. Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts. We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.
And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without trying. We're adding to the pollution.
I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that. And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.
(Applause)
And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.
So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology. And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.
So the missing link is us. It's those people who use these technologies every day. Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do. So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.
Second, people's experience of information is personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science. Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.
And third, can we find a way to connect the dots? No one sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms.
A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible. And educators around the world are testing different techniques for finding ways to make people critical of the content they consume. All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded.
There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems.
How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives? So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.
Thank you.
(Applause)