My relationship with the internet reminds me of the setup to a clichéd horror movie. You know, the blissfully happy family moves into their perfect new home, excited about their perfect future, and it's sunny outside and the birds are chirping ... And then it gets dark. And there are noises from the attic. And we realize that that perfect new house isn't so perfect.
When I started working at Google in 2006, Facebook was just a two-year-old, and Twitter hadn't yet been born. And I was in absolute awe of the internet and all of its promise to make us closer and smarter and more free. But as we were doing the inspiring work of building search engines and video-sharing sites and social networks, criminals, dictators and terrorists were figuring out how to use those same platforms against us. And we didn't have the foresight to stop them. Over the last few years, geopolitical forces have come online to wreak havoc. And in response, Google supported a few colleagues and me to set up a new group called Jigsaw, with a mandate to make people safer from threats like violent extremism, censorship, persecution -- threats that feel very personal to me because I was born in Iran, and I left in the aftermath of a violent revolution. But I've come to realize that even if we had all of the resources of all of the technology companies in the world, we'd still fail if we overlooked one critical ingredient: the human experiences of the victims and perpetrators of those threats.
There are many challenges I could talk to you about today. I'm going to focus on just two. The first is terrorism. So in order to understand the radicalization process, we met with dozens of former members of violent extremist groups. One was a British schoolgirl, who had been taken off of a plane at London Heathrow as she was trying to make her way to Syria to join ISIS. And she was 13 years old. So I sat down with her and her father, and I said, "Why?" And she said, "I was looking at pictures of what life is like in Syria, and I thought I was going to go and live in the Islamic Disney World." That's what she saw in ISIS. She thought she'd meet and marry a jihadi Brad Pitt and go shopping in the mall all day and live happily ever after.
ISIS understands what drives people, and they carefully craft a message for each audience. Just look at how many languages they translate their marketing material into. They make pamphlets, radio shows and videos in not just English and Arabic, but German, Russian, French, Turkish, Kurdish, Hebrew, Mandarin Chinese. I've even seen an ISIS-produced video in sign language. Just think about that for a second: ISIS took the time and made the effort to ensure their message is reaching the deaf and hard of hearing. It's actually not tech-savviness that is the reason why ISIS wins hearts and minds. It's their insight into the prejudices, the vulnerabilities, the desires of the people they're trying to reach that does that. That's why it's not enough for the online platforms to focus on removing recruiting material. If we want to have a shot at building meaningful technology that's going to counter radicalization, we have to start with the human journey at its core.
So we went to Iraq to speak to young men who'd bought into ISIS's promise of heroism and righteousness, who'd taken up arms to fight for them and then who'd defected after they witnessed the brutality of ISIS's rule. And I'm sitting there in this makeshift prison in the north of Iraq with this 23-year-old who had actually trained as a suicide bomber before defecting. And he says, "I arrived in Syria full of hope, and immediately, I had two of my prized possessions confiscated: my passport and my mobile phone." The symbols of his physical and digital liberty were taken away from him on arrival. And then this is the way he described that moment of loss to me. He said, "You know in 'Tom and Jerry,' when Jerry wants to escape, and then Tom locks the door and swallows the key and you see it bulging out of his throat as it travels down?" And of course, I really could see the image that he was describing, and I really did connect with the feeling that he was trying to convey, which was one of doom, when you know there's no way out.
And I was wondering: What, if anything, could have changed his mind the day that he left home? So I asked, "If you knew everything that you know now about the suffering and the corruption, the brutality -- that day you left home, would you still have gone?" And he said, "Yes." And I thought, "Holy crap, he said 'Yes.'" And then he said, "At that point, I was so brainwashed, I wasn't taking in any contradictory information. I couldn't have been swayed."
"Well, what if you knew everything that you know now six months before the day that you left?"
"At that point, I think it probably would have changed my mind."
Radicalization isn't this yes-or-no choice. It's a process, during which people have questions -- about ideology, religion, the living conditions. And they're coming online for answers, which is an opportunity to reach them. And there are videos online from people who have answers -- defectors, for example, telling the story of their journey into and out of violence; stories like the one from that man I met in the Iraqi prison. There are locals who've uploaded cell phone footage of what life is really like in the caliphate under ISIS's rule. There are clerics who are sharing peaceful interpretations of Islam. But you know what? These people don't generally have the marketing prowess of ISIS. They risk their lives to speak up and confront terrorist propaganda, and then they tragically don't reach the people who most need to hear from them. And we wanted to see if technology could change that.
So in 2016, we partnered with Moonshot CVE to pilot a new approach to countering radicalization called the "Redirect Method." It uses the power of online advertising to bridge the gap between those susceptible to ISIS's messaging and those credible voices that are debunking that messaging. And it works like this: someone looking for extremist material -- say they search for "How do I join ISIS?" -- will see an ad appear that invites them to watch a YouTube video of a cleric, of a defector -- someone who has an authentic answer. And that targeting is based not on a profile of who they are, but of determining something that's directly relevant to their query or question.
During our eight-week pilot in English and Arabic, we reached over 300,000 people who had expressed an interest in or sympathy towards a jihadi group. These people were now watching videos that could prevent them from making devastating choices. And because violent extremism isn't confined to any one language, religion or ideology, the Redirect Method is now being deployed globally to protect people being courted online by violent ideologues, whether they're Islamists, white supremacists or other violent extremists, with the goal of giving them the chance to hear from someone on the other side of that journey; to give them the chance to choose a different path.
It turns out that often the bad guys are good at exploiting the internet, not because they're some kind of technological geniuses, but because they understand what makes people tick. I want to give you a second example: online harassment. Online harassers also work to figure out what will resonate with another human being. Not to recruit them like ISIS does, but to cause them pain. Imagine this: you're a woman, you're married, you have a kid. You post something on social media, and in a reply, you're told that you'll be raped, that your son will be watching, details of when and where. In fact, your home address is put online for everyone to see. That feels like a pretty real threat. Do you think you'd go home? Do you think you'd continue doing the thing that you were doing? Would you continue doing that thing that's irritating your attacker?
Online abuse has been this perverse art of figuring out what makes people angry, what makes people afraid, what makes people insecure, and then pushing those pressure points until they're silenced. When online harassment goes unchecked, free speech is stifled. And even the people hosting the conversation throw up their arms and call it quits, closing their comment sections and their forums altogether. That means we're actually losing spaces online to meet and exchange ideas. And where online spaces remain, we descend into echo chambers with people who think just like us. But that enables the spread of disinformation; that facilitates polarization. What if technology instead could enable empathy at scale?
This was the question that motivated our partnership with Google's Counter Abuse team, Wikipedia and newspapers like the New York Times. We wanted to see if we could build machine-learning models that could understand the emotional impact of language. Could we predict which comments were likely to make someone else leave the online conversation? And that's no mean feat. That's no trivial accomplishment for AI to be able to do something like that. I mean, just consider these two examples of messages that could have been sent to me last week. "Break a leg at TED!" ... and "I'll break your legs at TED."
(Laughter)
You're human; that's why that's an obvious difference to you, even though the words are pretty much the same. But for AI, it takes some training to teach the models to recognize that difference. The beauty of building AI that can tell the difference is that AI can then scale to the size of the online toxicity phenomenon, and that was our goal in building our technology called Perspective. With the help of Perspective, the New York Times, for example, has increased spaces online for conversation. Before our collaboration, they only had comments enabled on just 10 percent of their articles. With the help of machine learning, they have that number up to 30 percent. So they've tripled it, and we're still just getting started.
But this is about way more than just making moderators more efficient. Right now I can see you, and I can gauge how what I'm saying is landing with you. You don't have that opportunity online. Imagine if machine learning could give commenters, as they're typing, real-time feedback about how their words might land, just like facial expressions do in a face-to-face conversation. Machine learning isn't perfect, and it still makes plenty of mistakes. But if we can build technology that understands the emotional impact of language, we can build empathy. That means that we can have dialogue between people with different politics, different worldviews, different values. And we can reinvigorate the spaces online that most of us have given up on.
When people use technology to exploit and harm others, they're preying on our human fears and vulnerabilities. If we ever thought that we could build an internet insulated from the dark side of humanity, we were wrong. If we want today to build technology that can overcome the challenges that we face, we have to throw our entire selves into understanding the issues and into building solutions that are as human as the problems they aim to solve. Let's make that happen.
Thank you.
(Applause)