My relationship with the internet reminds me of the setup to a clichéd horror movie. You know, the blissfully happy family moves into their perfect new home, excited about their perfect future, and it's sunny outside and the birds are chirping ... And then it gets dark. And there are noises from the attic. And we realize that that perfect new house isn't so perfect.
When I started working at Google in 2006, Facebook was just a two-year-old, and Twitter hadn't yet been born. And I was in absolute awe of the internet and all of its promise to make us closer and smarter and more free. But as we were doing the inspiring work of building search engines and video-sharing sites and social networks, criminals, dictators and terrorists were figuring out how to use those same platforms against us. And we didn't have the foresight to stop them. Over the last few years, geopolitical forces have come online to wreak havoc. And in response, Google supported a few colleagues and me to set up a new group called Jigsaw, with a mandate to make people safer from threats like violent extremism, censorship, persecution -- threats that feel very personal to me because I was born in Iran, and I left in the aftermath of a violent revolution. But I've come to realize that even if we had all of the resources of all of the technology companies in the world, we'd still fail if we overlooked one critical ingredient: the human experiences of the victims and perpetrators of those threats.
There are many challenges I could talk to you about today. I'm going to focus on just two. The first is terrorism. So in order to understand the radicalization process, we met with dozens of former members of violent extremist groups. One was a British schoolgirl, who had been taken off a plane at London Heathrow as she was trying to make her way to Syria to join ISIS. And she was 13 years old. So I sat down with her and her father, and I said, "Why?" And she said, "I was looking at pictures of what life is like in Syria, and I thought I was going to go and live in the Islamic Disney World." That's what she saw in ISIS. She thought she'd meet and marry a jihadi Brad Pitt and go shopping in the mall all day and live happily ever after.
ISIS understands what drives people, and they carefully craft a message for each audience. Just look at how many languages they translate their marketing material into. They make pamphlets, radio shows and videos in not just English and Arabic, but German, Russian, French, Turkish, Kurdish, Hebrew, Mandarin Chinese. I've even seen an ISIS-produced video in sign language. Just think about that for a second: ISIS took the time and made the effort to ensure their message is reaching the deaf and hard of hearing. It's actually not tech-savviness that is the reason why ISIS wins hearts and minds. It's their insight into the prejudices, the vulnerabilities, the desires of the people they're trying to reach that does that. That's why it's not enough for the online platforms to focus on removing recruiting material. If we want to have a shot at building meaningful technology that's going to counter radicalization, we have to start with the human journey at its core.
So we went to Iraq to speak to young men who'd bought into ISIS's promise of heroism and righteousness, who'd taken up arms to fight for them and then who'd defected after they witnessed the brutality of ISIS's rule. And I'm sitting there in this makeshift prison in the north of Iraq with this 23-year-old who had actually trained as a suicide bomber before defecting. And he says, "I arrived in Syria full of hope, and immediately, I had two of my prized possessions confiscated: my passport and my mobile phone." The symbols of his physical and digital liberty were taken away from him on arrival. And then this is the way he described that moment of loss to me. He said, "You know in 'Tom and Jerry,' when Jerry wants to escape, and then Tom locks the door and swallows the key and you see it bulging out of his throat as it travels down?" And of course, I really could see the image that he was describing, and I really did connect with the feeling that he was trying to convey, which was one of doom, when you know there's no way out.
And I was wondering: What, if anything, could have changed his mind the day that he left home? So I asked, "If you knew everything that you know now about the suffering and the corruption, the brutality -- that day you left home, would you still have gone?" And he said, "Yes." And I thought, "Holy crap, he said 'Yes.'" And then he said, "At that point, I was so brainwashed, I wasn't taking in any contradictory information. I couldn't have been swayed."
"Well, what if you knew everything that you know now six months before the day that you left?"
"At that point, I think it probably would have changed my mind."
Radicalization isn't this yes-or-no choice. It's a process, during which people have questions -- about ideology, religion, the living conditions. And they're coming online for answers, which is an opportunity to reach them. And there are videos online from people who have answers -- defectors, for example, telling the story of their journey into and out of violence; stories like the one from that man I met in the Iraqi prison. There are locals who've uploaded cell phone footage of what life is really like in the caliphate under ISIS's rule. There are clerics who are sharing peaceful interpretations of Islam. But you know what? These people don't generally have the marketing prowess of ISIS. They risk their lives to speak up and confront terrorist propaganda, and then they tragically don't reach the people who most need to hear from them. And we wanted to see if technology could change that.
So in 2016, we partnered with Moonshot CVE to pilot a new approach to countering radicalization called the "Redirect Method." It uses the power of online advertising to bridge the gap between those susceptible to ISIS's messaging and those credible voices that are debunking that messaging. And it works like this: someone looking for extremist material -- say they search for "How do I join ISIS?" -- will see an ad appear that invites them to watch a YouTube video of a cleric, of a defector -- someone who has an authentic answer. And that targeting is based not on a profile of who they are, but of determining something that's directly relevant to their query or question.
During our eight-week pilot in English and Arabic, we reached over 300,000 people who had expressed an interest in or sympathy towards a jihadi group. These people were now watching videos that could prevent them from making devastating choices. And because violent extremism isn't confined to any one language, religion or ideology, the Redirect Method is now being deployed globally to protect people being courted online by violent ideologues, whether they're Islamists, white supremacists or other violent extremists, with the goal of giving them the chance to hear from someone on the other side of that journey; to give them the chance to choose a different path.
It turns out that often the bad guys are good at exploiting the internet, not because they're some kind of technological geniuses, but because they understand what makes people tick. I want to give you a second example: online harassment. Online harassers also work to figure out what will resonate with another human being, not to recruit them as ISIS does, but to cause them pain. Imagine this: you're a woman, you're married, you have a kid. You post something on social media, and in a reply, you're told that you'll be raped, that your son will be watching, details of when and where. In fact, your home address is put online for everyone to see. That feels like a pretty real threat. Do you think you'd go home? Do you think you'd continue doing the thing that you were doing? Would you continue doing that thing that's irritating your attacker?
Online abuse has been this perverse art of figuring out what makes people angry, what makes people afraid, what makes people insecure, and then pushing those pressure points until they're silenced. When online harassment goes unchecked, free speech is stifled. And even the people hosting the conversation throw up their hands and call it quits, closing their comment sections and their forums altogether. That means we're actually losing spaces online to meet and exchange ideas. And where online spaces remain, we descend into echo chambers with people who think just like us. But that enables the spread of disinformation; that facilitates polarization. What if technology instead could enable empathy at scale?
This was the question that motivated our partnership with Google's Counter Abuse team, Wikipedia and newspapers like the New York Times. We wanted to see if we could build machine-learning models that could understand the emotional impact of language. Could we predict which comments were likely to make someone else leave the online conversation? And that's no mean feat. That's no trivial accomplishment for AI to be able to do something like that. I mean, just consider these two examples of messages that could have been sent to me last week. "Break a leg at TED!" ... and "I'll break your legs at TED."
(Laughter)
You are human; that's why the difference is obvious to you, even though the words are pretty much the same. But for AI, it takes some training to teach the models to recognize that difference. The beauty of building AI that can tell the difference is that AI can then scale to the size of the online toxicity phenomenon, and that was our goal in building our technology called Perspective. With the help of Perspective, the New York Times, for example, has increased spaces online for conversation. Before our collaboration, they had comments enabled on just 10 percent of their articles. With the help of machine learning, they have that number up to 30 percent. So they've tripled it, and we're still just getting started.
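Perspective exposes this kind of scoring as a public API. The sketch below prepares and sends a request following the request shape in Perspective's public documentation; calling it for real requires an API key and network access, so treat the details as an illustration rather than the production integration:

```python
import json
from urllib import request

# Endpoint from Perspective's public docs; a real API key is required to call it.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text: str) -> dict:
    """Build the JSON body Perspective expects: the comment text plus the
    attributes to score. TOXICITY is the model's estimate (0.0 to 1.0) of
    how likely a reader is to find the comment rude or disrespectful."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }

def analyze(text: str, api_key: str) -> dict:
    """POST the comment to the Perspective endpoint and return parsed scores."""
    body = json.dumps(build_request(text)).encode("utf-8")
    req = request.Request(
        f"{ANALYZE_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# The two messages from the talk share most of their words; the point is
# that a trained model, unlike simple word matching, can tell them apart.
friendly = build_request("Break a leg at TED!")
threat = build_request("I'll break your legs at TED.")
```

The design choice worth noting is that the model returns a probability rather than a verdict, which lets each platform pick its own threshold for moderation.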
But this is about way more than just making moderators more efficient. Right now I can see you, and I can gauge how what I'm saying is landing with you. You don't have that opportunity online. Imagine if machine learning could give commenters, as they're typing, real-time feedback about how their words might land, just like facial expressions do in a face-to-face conversation. Machine learning isn't perfect, and it still makes plenty of mistakes. But if we can build technology that understands the emotional impact of language, we can build empathy. That means that we can have dialogue between people with different politics, different worldviews, different values. And we can reinvigorate the spaces online that most of us have given up on.
When people use technology to exploit and harm others, they're preying on our human fears and vulnerabilities. If we ever thought that we could build an internet insulated from the dark side of humanity, we were wrong. If we want today to build technology that can overcome the challenges that we face, we have to throw our entire selves into understanding the issues and into building solutions that are as human as the problems they aim to solve. Let's make that happen.
Thank you.
(Applause)