No matter who you are or where you live, I'm guessing that you have at least one relative who likes to forward those emails. You know the ones I'm talking about -- the ones with dubious claims or conspiracy videos. And you've probably already muted them on Facebook for sharing social posts like this one.
It's an image of a banana with a strange red cross running through the center. And the text around it is warning people not to eat fruits that look like this, suggesting they've been injected with blood contaminated with the HIV virus. And the social share message above it simply says, "Please forward to save lives." Now, fact-checkers have been debunking this one for years, but it's one of those rumors that just won't die. A zombie rumor. And, of course, it's entirely false.
It might be tempting to laugh at an example like this, to say, "Well, who would believe this, anyway?" But the reason it's a zombie rumor is because it taps into people's deepest fears about their own safety and that of the people they love. And if you spend as much time as I have looking at misinformation, you know that this is just one example of many that taps into people's deepest fears and vulnerabilities.
Every day, across the world, we see scores of new memes on Instagram encouraging parents not to vaccinate their children. We see new videos on YouTube explaining that climate change is a hoax. And across all platforms, we see endless posts designed to demonize others on the basis of their race, religion or sexuality.
Welcome to one of the central challenges of our time. How can we maintain an internet with freedom of expression at the core, while also ensuring that the content that's being disseminated doesn't cause irreparable harms to our democracies, our communities and to our physical and mental well-being? Because we live in the information age, yet the central currency upon which we all depend -- information -- is no longer deemed entirely trustworthy and, at times, can appear downright dangerous. This is thanks in part to the runaway growth of social sharing platforms that allow us to scroll through, where lies and facts sit side by side, but with none of the traditional signals of trustworthiness.
And goodness -- our language around this is horribly muddled. People are still obsessed with the phrase "fake news," despite the fact that it's extraordinarily unhelpful and used to describe a number of things that are actually very different: lies, rumors, hoaxes, conspiracies, propaganda. And I really wish we could stop using a phrase that's been co-opted by politicians right around the world, from the left and the right, used as a weapon to attack a free and independent press.
(Applause)
Because we need our professional news media now more than ever. And besides, most of this content doesn't even masquerade as news. It's memes, videos, social posts. And most of it is not fake; it's misleading. We tend to fixate on what's true or false. But the biggest concern is actually the weaponization of context. Because the most effective disinformation has always been that which has a kernel of truth to it.
Let's take this example from London, from March 2017, a tweet that circulated widely in the aftermath of a terrorist incident on Westminster Bridge. This is a genuine image, not fake. The woman who appears in the photograph was interviewed afterwards, and she explained that she was utterly traumatized. She was on the phone to a loved one, and she wasn't looking at the victim out of respect. But it still was circulated widely with this Islamophobic framing, with multiple hashtags, including: #BanIslam. Now, if you worked at Twitter, what would you do? Would you take that down, or would you leave it up? My gut reaction, my emotional reaction, is to take this down. I hate the framing of this image. But freedom of expression is a human right, and if we start taking down speech that makes us feel uncomfortable, we're in trouble.
And this might look like a clear-cut case, but, actually, most speech isn't. These lines are incredibly difficult to draw. What's a well-meaning decision by one person is outright censorship to the next. What we now know is that this account, Texas Lone Star, was part of a wider Russian disinformation campaign, one that has since been taken down. Would that change your view? It would mine, because now it's a case of a coordinated campaign to sow discord. And for those of you who'd like to think that artificial intelligence will solve all of our problems, I think we can agree that we're a long way away from AI that's able to make sense of posts like this.
So I'd like to explain three interlocking issues that make this so complex and then think about some ways we can consider these challenges. First, we just don't have a rational relationship to information, we have an emotional one. It's just not true that more facts will make everything OK, because the algorithms that determine what content we see, well, they're designed to reward our emotional responses. And when we're fearful, oversimplified narratives, conspiratorial explanations and language that demonizes others are far more effective. And besides, many of these companies, their business model is attached to attention, which means these algorithms will always be skewed towards emotion.
Second, most of the speech I'm talking about here is legal. It would be a different matter if I was talking about child sexual abuse imagery or content that incites violence. It can be perfectly legal to post an outright lie. But people keep talking about taking down "problematic" or "harmful" content, but with no clear definition of what they mean by that, including Mark Zuckerberg, who recently called for global regulation to moderate speech. And my concern is that we're seeing governments right around the world rolling out hasty policy decisions that might actually trigger much more serious consequences when it comes to our speech. And even if we could decide which speech to leave up or take down, we've never had so much speech. Every second, millions of pieces of content are uploaded by people right around the world in different languages, drawing on thousands of different cultural contexts. We've simply never had effective mechanisms to moderate speech at this scale, whether powered by humans or by technology.
And third, these companies -- Google, Twitter, Facebook, WhatsApp -- they're part of a wider information ecosystem. We like to lay all the blame at their feet, but the truth is, the mass media and elected officials can also play an equal role in amplifying rumors and conspiracies when they want to. As can we, when we mindlessly forward divisive or misleading content without thinking. We're adding to the pollution.
I know we're all looking for an easy fix. But there just isn't one. Any solution will have to be rolled out at a massive scale, internet scale, and yes, the platforms, they're used to operating at that level. But can and should we allow them to fix these problems? They're certainly trying. But most of us would agree that, actually, we don't want global corporations to be the guardians of truth and fairness online. And I also think the platforms would agree with that. And at the moment, they're marking their own homework. They like to tell us that the interventions they're rolling out are working, but because they write their own transparency reports, there's no way for us to independently verify what's actually happening.
(Applause)
And let's also be clear that most of the changes we see only happen after journalists undertake an investigation and find evidence of bias or content that breaks their community guidelines. So yes, these companies have to play a really important role in this process, but they can't control it.
So what about governments? Many people believe that global regulation is our last hope in terms of cleaning up our information ecosystem. But what I see are lawmakers who are struggling to keep up to date with the rapid changes in technology. And worse, they're working in the dark, because they don't have access to data to understand what's happening on these platforms. And anyway, which governments would we trust to do this? We need a global response, not a national one.
So the missing link is us. It's those people who use these technologies every day. Can we design a new infrastructure to support quality information? Well, I believe we can, and I've got a few ideas about what we might be able to actually do. So firstly, if we're serious about bringing the public into this, can we take some inspiration from Wikipedia? They've shown us what's possible. Yes, it's not perfect, but they've demonstrated that with the right structures, with a global outlook and lots and lots of transparency, you can build something that will earn the trust of most people. Because we have to find a way to tap into the collective wisdom and experience of all users. This is particularly the case for women, people of color and underrepresented groups. Because guess what? They are experts when it comes to hate and disinformation, because they have been the targets of these campaigns for so long. And over the years, they've been raising flags, and they haven't been listened to. This has got to change. So could we build a Wikipedia for trust? Could we find a way that users can actually provide insights? They could offer insights around difficult content-moderation decisions. They could provide feedback when platforms decide they want to roll out new changes.
Second, people's experiences with information are personalized. My Facebook news feed is very different to yours. Your YouTube recommendations are very different to mine. That makes it impossible for us to actually examine what information people are seeing. So could we imagine developing some kind of centralized open repository for anonymized data, with privacy and ethical concerns built in? Because imagine what we would learn if we built out a global network of concerned citizens who wanted to donate their social data to science. Because we actually know very little about the long-term consequences of hate and disinformation on people's attitudes and behaviors. And what we do know, most of that has been carried out in the US, despite the fact that this is a global problem. We need to work on that, too.
And third, can we find a way to connect the dots? No single sector, let alone nonprofit, start-up or government, is going to solve this. But there are very smart people right around the world working on these challenges, from newsrooms, civil society, academia, activist groups. And you can see some of them here. Some are building out indicators of content credibility. Others are fact-checking, so that false claims, videos and images can be down-ranked by the platforms.
A nonprofit I helped to found, First Draft, is working with normally competitive newsrooms around the world to help them build out investigative, collaborative programs. And Danny Hillis, a software architect, is designing a new system called The Underlay, which will be a record of all public statements of fact connected to their sources, so that people and algorithms can better judge what is credible. And educators around the world are testing different techniques for finding ways to make people critical of the content they consume. All of these efforts are wonderful, but they're working in silos, and many of them are woefully underfunded.
There are also hundreds of very smart people working inside these companies, but again, these efforts can feel disjointed, because they're actually developing different solutions to the same problems.
How can we find a way to bring people together in one physical location for days or weeks at a time, so they can actually tackle these problems together but from their different perspectives? So can we do this? Can we build out a coordinated, ambitious response, one that matches the scale and the complexity of the problem? I really think we can. Together, let's rebuild our information commons.
Thank you.
(Applause)