My relationship with the internet reminds me of the setup to a clichéd horror movie. You know, the blissfully happy family moves into their perfect new home, excited about their perfect future, and it's sunny outside and the birds are chirping ... And then it gets dark. And there are noises from the attic. And we realize that that perfect new house isn't so perfect.
When I started working at Google in 2006, Facebook was just a two-year-old, and Twitter hadn't yet been born. And I was in absolute awe of the internet and all of its promise to make us closer and smarter and more free. But as we were doing the inspiring work of building search engines and video-sharing sites and social networks, criminals, dictators and terrorists were figuring out how to use those same platforms against us. And we didn't have the foresight to stop them. Over the last few years, geopolitical forces have come online to wreak havoc. And in response, Google supported a few colleagues and me to set up a new group called Jigsaw, with a mandate to make people safer from threats like violent extremism, censorship, persecution -- threats that feel very personal to me because I was born in Iran, and I left in the aftermath of a violent revolution. But I've come to realize that even if we had all of the resources of all of the technology companies in the world, we'd still fail if we overlooked one critical ingredient: the human experiences of the victims and perpetrators of those threats.
There are many challenges I could talk to you about today. I'm going to focus on just two. The first is terrorism. So in order to understand the radicalization process, we met with dozens of former members of violent extremist groups. One was a British schoolgirl, who had been taken off of a plane at London Heathrow as she was trying to make her way to Syria to join ISIS. And she was 13 years old. So I sat down with her and her father, and I said, "Why?" And she said, "I was looking at pictures of what life is like in Syria, and I thought I was going to go and live in the Islamic Disney World." That's what she saw in ISIS. She thought she'd meet and marry a jihadi Brad Pitt and go shopping in the mall all day and live happily ever after.
ISIS understands what drives people, and they carefully craft a message for each audience. Just look at how many languages they translate their marketing material into. They make pamphlets, radio shows and videos in not just English and Arabic, but German, Russian, French, Turkish, Kurdish, Hebrew, Mandarin Chinese. I've even seen an ISIS-produced video in sign language. Just think about that for a second: ISIS took the time and made the effort to ensure their message is reaching the deaf and hard of hearing. It's actually not tech-savviness that is the reason why ISIS wins hearts and minds. It's their insight into the prejudices, the vulnerabilities, the desires of the people they're trying to reach that does that. That's why it's not enough for the online platforms to focus on removing recruiting material. If we want to have a shot at building meaningful technology that's going to counter radicalization, we have to start with the human journey at its core.
So we went to Iraq to speak to young men who'd bought into ISIS's promise of heroism and righteousness, who'd taken up arms to fight for them and then who'd defected after they witnessed the brutality of ISIS's rule. And I'm sitting there in this makeshift prison in the north of Iraq with this 23-year-old who had actually trained as a suicide bomber before defecting. And he says, "I arrived in Syria full of hope, and immediately, I had two of my prized possessions confiscated: my passport and my mobile phone." The symbols of his physical and digital liberty were taken away from him on arrival. And then this is the way he described that moment of loss to me. He said, "You know in 'Tom and Jerry,' when Jerry wants to escape, and then Tom locks the door and swallows the key and you see it bulging out of his throat as it travels down?" And of course, I really could see the image that he was describing, and I really did connect with the feeling that he was trying to convey, which was one of doom, when you know there's no way out.
And I was wondering: What, if anything, could have changed his mind the day that he left home? So I asked, "If you knew everything that you know now about the suffering and the corruption, the brutality -- that day you left home, would you still have gone?" And he said, "Yes." And I thought, "Holy crap, he said 'Yes.'" And then he said, "At that point, I was so brainwashed, I wasn't taking in any contradictory information. I couldn't have been swayed."
"Well, what if you knew everything that you know now six months before the day that you left?"
"At that point, I think it probably would have changed my mind."
Radicalization isn't this yes-or-no choice. It's a process, during which people have questions -- about ideology, religion, the living conditions. And they're coming online for answers, which is an opportunity to reach them. And there are videos online from people who have answers -- defectors, for example, telling the story of their journey into and out of violence; stories like the one from that man I met in the Iraqi prison. There are locals who've uploaded cell phone footage of what life is really like in the caliphate under ISIS's rule. There are clerics who are sharing peaceful interpretations of Islam. But you know what? These people don't generally have the marketing prowess of ISIS. They risk their lives to speak up and confront terrorist propaganda, and then they tragically don't reach the people who most need to hear from them. And we wanted to see if technology could change that.
So in 2016, we partnered with Moonshot CVE to pilot a new approach to countering radicalization called the "Redirect Method." It uses the power of online advertising to bridge the gap between those susceptible to ISIS's messaging and those credible voices that are debunking that messaging. And it works like this: someone looking for extremist material -- say they search for "How do I join ISIS?" -- will see an ad appear that invites them to watch a YouTube video of a cleric, of a defector -- someone who has an authentic answer. And that targeting is based not on a profile of who they are, but of determining something that's directly relevant to their query or question.
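The targeting described here keys off the text of the query itself, not a profile of the searcher. That idea can be shown in a minimal sketch; the phrases and URLs below are hypothetical placeholders, and the real Redirect Method ran on commercial ad platforms rather than custom code like this:

```python
from typing import Optional

# Toy illustration of the Redirect Method's targeting idea: match the text
# of a search query (not any profile of the searcher) against risk phrases,
# and answer with a counter-narrative video from a credible voice.
# All phrases and URLs here are invented placeholders.
REDIRECT_CATALOG = {
    "join isis": "https://video.example/defector-testimony",
    "life in the caliphate": "https://video.example/local-footage",
}

def redirect_ad(query: str) -> Optional[str]:
    """Return a counter-narrative link if the query matches a risk phrase."""
    q = query.lower()
    for phrase, url in REDIRECT_CATALOG.items():
        if phrase in q:
            return url
    return None
```

So a search like "How do I join ISIS?" would surface the defector-testimony video, while an unrelated query triggers nothing at all.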
During our eight-week pilot in English and Arabic, we reached over 300,000 people who had expressed an interest in or sympathy towards a jihadi group. These people were now watching videos that could prevent them from making devastating choices. And because violent extremism isn't confined to any one language, religion or ideology, the Redirect Method is now being deployed globally to protect people being courted online by violent ideologues, whether they're Islamists, white supremacists or other violent extremists, with the goal of giving them the chance to hear from someone on the other side of that journey; to give them the chance to choose a different path.
It turns out that often the bad guys are good at exploiting the internet, not because they're some kind of technological geniuses, but because they understand what makes people tick. I want to give you a second example: online harassment. Online harassers also work to figure out what will resonate with another human being. Not to recruit them like ISIS does, but to cause them pain. Imagine this: you're a woman, you're married, you have a kid. You post something on social media, and in a reply, you're told that you'll be raped, that your son will be watching, details of when and where. In fact, your home address is put online for everyone to see. That feels like a pretty real threat. Do you think you'd go home? Do you think you'd continue doing the thing that you were doing? Would you continue doing that thing that's irritating your attacker?
Online abuse has been this perverse art of figuring out what makes people angry, what makes people afraid, what makes people insecure, and then pushing those pressure points until they're silenced. When online harassment goes unchecked, free speech is stifled. And even the people hosting the conversation throw up their arms and call it quits, closing their comment sections and their forums altogether. That means we're actually losing spaces online to meet and exchange ideas. And where online spaces remain, we descend into echo chambers with people who think just like us. But that enables the spread of disinformation; that facilitates polarization. What if technology instead could enable empathy at scale?
This was the question that motivated our partnership with Google's Counter Abuse team, Wikipedia and newspapers like the New York Times. We wanted to see if we could build machine-learning models that could understand the emotional impact of language. Could we predict which comments were likely to make someone else leave the online conversation? And that's no mean feat. That's no trivial accomplishment for AI to be able to do something like that. I mean, just consider these two examples of messages that could have been sent to me last week. "Break a leg at TED!" ... and "I'll break your legs at TED."
(Laughter)
You are human, that's why that's an obvious difference to you, even though the words are pretty much the same. But for AI, it takes some training to teach the models to recognize that difference. The beauty of building AI that can tell the difference is that AI can then scale to the size of the online toxicity phenomenon, and that was our goal in building our technology called Perspective. With the help of Perspective, the New York Times, for example, has increased spaces online for conversation. Before our collaboration, they only had comments enabled on just 10 percent of their articles. With the help of machine learning, they have that number up to 30 percent. So they've tripled it, and we're still just getting started.
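The distinction between those two messages is what the models have to learn. Here is a toy sketch of that distinction; this is a hand-written heuristic standing in for Perspective, which is a trained machine-learning model, and the word list, pattern, and scores are invented for illustration only:

```python
import re

# Hand-written toy heuristic standing in for a trained model like Perspective.
# The word list, regex pattern, and scores are made up for illustration.
VIOLENT_VERBS = {"break", "hurt", "kill", "beat"}

# A crude signal for violence aimed at the reader: "I'll ... you/your".
TARGETED = re.compile(r"\b(i'll|i will)\b.*\byour?\b", re.IGNORECASE)

def toxicity_score(comment: str) -> float:
    """Return a rough score in [0, 1]; higher means more likely
    to drive someone out of the conversation."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    has_violence = bool(words & VIOLENT_VERBS)
    if has_violence and TARGETED.search(comment):
        return 0.9   # violent act directed at the reader: reads as a threat
    if has_violence:
        return 0.3   # violent word alone, often idiomatic ("break a leg")
    return 0.0
```

On this toy scale, "Break a leg at TED!" scores low while "I'll break your legs at TED" scores high, because only the second aims the violent verb at the reader. A real model learns such distinctions from labeled examples rather than hand-coded rules.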
But this is about way more than just making moderators more efficient. Right now I can see you, and I can gauge how what I'm saying is landing with you. You don't have that opportunity online. Imagine if machine learning could give commenters, as they're typing, real-time feedback about how their words might land, just like facial expressions do in a face-to-face conversation. Machine learning isn't perfect, and it still makes plenty of mistakes. But if we can build technology that understands the emotional impact of language, we can build empathy. That means that we can have dialogue between people with different politics, different worldviews, different values. And we can reinvigorate the spaces online that most of us have given up on.
When people use technology to exploit and harm others, they're preying on our human fears and vulnerabilities. If we ever thought that we could build an internet insulated from the dark side of humanity, we were wrong. If we want today to build technology that can overcome the challenges that we face, we have to throw our entire selves into understanding the issues and into building solutions that are as human as the problems they aim to solve. Let's make that happen.
Thank you.
(Applause)