Chris Anderson: What worries you right now? You've been very open about lots of issues on Twitter. What would be your top worry about where things are right now?
Jack Dorsey: Right now, the health of the conversation. So, our purpose is to serve the public conversation, and we have seen a number of attacks on it. We've seen abuse, we've seen harassment, we've seen manipulation, automation, human coordination, misinformation. So these are all dynamics that we were not expecting 13 years ago when we were starting the company. But we do now see them at scale, and what worries me most is just our ability to address it in a systemic way that is scalable, that has a rigorous understanding of how we're taking action, a transparent understanding of how we're taking action and a rigorous appeals process for when we're wrong, because we will be wrong.
Whitney Pennington Rodgers: I'm really glad to hear that that's something that concerns you, because I think there's been a lot written about people who feel they've been abused and harassed on Twitter, and I think no one more so than women and women of color and black women. And there's been data that's come out -- Amnesty International put out a report a few months ago where they showed that, for a subset of active black female Twitter users, on average one in 10 of the tweets they received was some form of harassment. And so when you think about health for the community on Twitter, I'm interested to hear, "health for everyone," but specifically: How are you looking to make Twitter a safe space for that subset, for women, for women of color and black women?
JD: Yeah. So it's a pretty terrible situation when you're coming to a service that, ideally, you want to learn something about the world, and you spend the majority of your time reporting abuse, receiving abuse, receiving harassment. So what we're looking most deeply at is just the incentives that the platform naturally provides and the service provides. Right now, the dynamic of the system makes it super-easy to harass and to abuse others through the service, and unfortunately, the majority of our system in the past worked entirely based on people reporting harassment and abuse. So about midway last year, we decided that we were going to apply a lot more machine learning, a lot more deep learning to the problem, and try to be a lot more proactive around where abuse is happening, so that we can take the burden off the victim completely. And we've made some progress recently. About 38 percent of abusive tweets are now proactively identified by machine learning algorithms so that people don't actually have to report them. But those that are identified are still reviewed by humans, so we do not take down content or accounts without a human actually reviewing it. But that was from zero percent just a year ago. So that meant, at that zero percent, every single person who received abuse had to actually report it, which was a lot of work for them, a lot of work for us and just ultimately unfair.
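The pipeline Dorsey describes here -- a machine-learning model proactively flags likely-abusive tweets, but a human always makes the final call before any content or account comes down -- can be sketched roughly as follows. This is a minimal illustration under stated assumptions; the class names, the toy classifier, and the threshold are all hypothetical, not Twitter's actual system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Tweet:
    tweet_id: int
    text: str

@dataclass
class ReviewQueue:
    """Tweets flagged by the model wait here for a human decision."""
    pending: List[Tweet] = field(default_factory=list)

def triage(tweets, score: Callable[[str], float],
           threshold: float, queue: ReviewQueue) -> int:
    """Proactively flag likely-abusive tweets for human review.

    Nothing is removed automatically: crossing the threshold only
    enqueues the tweet so a human can make the final judgment.
    Returns the number of tweets flagged.
    """
    flagged = 0
    for t in tweets:
        if score(t.text) >= threshold:
            queue.pending.append(t)
            flagged += 1
    return flagged

# Toy stand-in for a trained abuse classifier (hypothetical).
def toy_score(text: str) -> float:
    return 0.9 if "abuse" in text.lower() else 0.1

queue = ReviewQueue()
tweets = [Tweet(1, "hello world"), Tweet(2, "this is abuse")]
n = triage(tweets, toy_score, threshold=0.5, queue=queue)
# n == 1; tweet 2 awaits human review, tweet 1 is untouched
```

The key design point, matching the talk: the model shifts the reporting burden off victims, while the human reviewer remains the only actor who can remove anything.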
The other thing that we're doing is making sure that we, as a company, have representation of all the communities that we're trying to serve. We can't build a business that is successful unless we have a diversity of perspective inside of our walls that actually feel these issues every single day. And that's not just with the team that's doing the work, it's also within our leadership as well. So we need to continue to build empathy for what people are experiencing and give them better tools to act on it and also give our customers a much better and easier approach to handle some of the things that they're seeing. So a lot of what we're doing is around technology, but we're also looking at the incentives on the service: What does Twitter incentivize you to do when you first open it up? And in the past, it's incented a lot of outrage, it's incented a lot of mob behavior, it's incented a lot of group harassment. And we have to look a lot deeper at some of the fundamentals of what the service is doing to make the bigger shifts. We can make a bunch of small shifts around technology, as I just described, but ultimately, we have to look deeply at the dynamics in the network itself, and that's what we're doing.
CA: But what's your sense -- what is the kind of thing that you might be able to change that would actually fundamentally shift behavior?
JD: Well, one of the things -- we started the service with this concept of following an account, as an example, and I don't believe that's why people actually come to Twitter. I believe Twitter is best as an interest-based network. People come with a particular interest. They have to do a ton of work to find and follow the related accounts around those interests. What we could do instead is allow you to follow an interest, follow a hashtag, follow a trend, follow a community, which gives us the opportunity to show all of the accounts, all the topics, all the moments, all the hashtags that are associated with that particular topic and interest, which really opens up the perspective that you see. But that is a huge fundamental shift to bias the entire network away from just an account bias towards a topics and interest bias.
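The shift from an account bias to an interest bias amounts to a fan-out mapping: following one interest expands into all the accounts associated with it. A minimal sketch, assuming a toy in-memory index (all names here are hypothetical):

```python
from collections import defaultdict

# Hypothetical toy model of "follow an interest" rather than an
# account: each interest maps to the set of accounts that post about
# it, so a single follow fans out to many sources.
interest_accounts = defaultdict(set)

def tag_account(account: str, interest: str) -> None:
    """Associate an account with an interest (e.g. via its hashtags)."""
    interest_accounts[interest].add(account)

def timeline_sources(followed_interests) -> set:
    """Union of all accounts behind the interests a user follows."""
    sources = set()
    for interest in followed_interests:
        sources |= interest_accounts[interest]
    return sources

tag_account("@nasa", "space")
tag_account("@esa", "space")
tag_account("@ipcc", "climate")

# Following one interest surfaces every associated account at once,
# instead of the user hunting down each account individually.
print(sorted(timeline_sources({"space"})))   # ['@esa', '@nasa']
print(sorted(timeline_sources({"space", "climate"})))
```

This captures why Dorsey calls it a fundamental shift: the unit the network ranks and recommends around becomes the topic, and accounts are discovered through it rather than the other way around.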
CA: Because isn't it the case that one reason why you have so much content on there is a result of putting millions of people around the world in this kind of gladiatorial contest with each other for followers, for attention? Like, from the point of view of people who just read Twitter, that's not an issue, but for the people who actually create it, everyone's out there saying, "You know, I wish I had a few more 'likes,' followers, retweets." And so they're constantly experimenting, trying to find the path to do that. And what we've all discovered is that the number one path to do that is to be some form of provocative, obnoxious, eloquently obnoxious, like, eloquent insults are a dream on Twitter, where you rapidly pile up -- and it becomes this self-fueling process of driving outrage. How do you defuse that?
JD: Yeah, I mean, I think you're spot on, but that goes back to the incentives. Like, one of the choices we made in the early days was we had this number that showed how many people follow you. We decided that number should be big and bold, and anything that's on the page that's big and bold has importance, and those are the things that you want to drive. Was that the right decision at the time? Probably not. If I had to start the service again, I would not emphasize the follower count as much. I would not emphasize the "like" count as much. I don't think I would even create "like" in the first place, because it doesn't actually push what we believe now to be the most important thing, which is healthy contribution back to the network and conversation to the network, participation within conversation, learning something from the conversation. Those are not things that we thought of 13 years ago, and we believe are extremely important right now.
So we have to look at how we display the follower count, how we display retweet count, how we display "likes," and just ask the deep question: Is this really the number that we want people to drive up? Is this the thing that, when you open Twitter, you see, "That's the thing I need to increase?" And I don't believe that's the case right now.
(Applause)
WPR: I think we should look at some of the tweets that are coming in from the audience as well.
CA: Let's see what you guys are asking. I mean, this is -- generally, one of the amazing things about Twitter is how you can use it for crowd wisdom, you know, that more knowledge, more questions, more points of view than you can imagine, and sometimes, many of them are really healthy.
WPR: I think one I saw that passed already quickly down here, "What's Twitter's plan to combat foreign meddling in the 2020 US election?" I think that's something that's an issue we're seeing on the internet in general, that we have a lot of malicious automated activity happening. And on Twitter, for example, in fact, we have some work that's come from our friends at Zignal Labs, and maybe we can even see that to give us an example of what exactly I'm talking about, where you have these bots, if you will, or coordinated automated malicious account activity, that is being used to influence things like elections. And in this example we have from Zignal which they've shared with us using the data that they have from Twitter, you actually see that in this case, white represents the humans -- human accounts, each dot is an account. The pinker it is, the more automated the activity is. And you can see how you have a few humans interacting with bots. In this case, it's related to the election in Israel and spreading misinformation about Benny Gantz, and as we know, in the end, that was an election that Netanyahu won by a slim margin, and that may have been in some case influenced by this. And when you think about that happening on Twitter, what are the things that you're doing, specifically, to ensure you don't have misinformation like this spreading in this way, influencing people in ways that could affect democracy?
JD: Just to back up a bit, we asked ourselves a question: Can we actually measure the health of a conversation, and what does that mean? And in the same way that you have indicators and we have indicators as humans in terms of are we healthy or not, such as temperature, the flushness of your face, we believe that we could find the indicators of conversational health. And we worked with a lab called Cortico at MIT to propose four starter indicators that we believe we could ultimately measure on the system. And the first one is what we're calling shared attention. It's a measure of how much of the conversation is attentive on the same topic versus disparate. The second one is called shared reality, and this is what percentage of the conversation shares the same facts -- not whether those facts are truthful or not, but are we sharing the same facts as we converse? The third is receptivity: How much of the conversation is receptive or civil or the inverse, toxic? And then the fourth is variety of perspective. So, are we seeing filter bubbles or echo chambers, or are we actually getting a variety of opinions within the conversation? And implicit in all four of these is the understanding that, as they increase, the conversation gets healthier and healthier.
So our first step is to see if we can measure these online, which we believe we can. We have the most momentum around receptivity. We have a toxicity score, a toxicity model, on our system that can actually measure, with a pretty high degree of accuracy, whether you are likely to walk away from a conversation that you're having on Twitter because you feel it's toxic. We're working to measure the rest, and the next step is, as we build up solutions, to watch how these measurements trend over time and continue to experiment. And our goal is to make sure that these are balanced, because if you increase one, you might decrease another. If you increase variety of perspective, you might actually decrease shared reality.
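The four indicators and the balance concern can be made concrete with a small sketch. The indicator names come from the talk; everything else -- the sample scores and the aggregation rule (taking the minimum, so raising one indicator cannot mask a drop in another) -- is an illustrative assumption, not Twitter's or Cortico's actual model.

```python
# The four conversational-health indicators named in the talk.
INDICATORS = ("shared_attention", "shared_reality",
              "receptivity", "variety_of_perspective")

def health(scores: dict) -> float:
    """Composite health of a conversation on a 0.0-1.0 scale.

    Scoring by the weakest indicator (an assumed rule, for
    illustration) reflects the balance concern in the talk:
    increasing variety of perspective, say, is no improvement
    if shared reality collapses at the same time.
    """
    missing = set(INDICATORS) - scores.keys()
    if missing:
        raise ValueError(f"missing indicators: {missing}")
    return min(scores[name] for name in INDICATORS)

before = {"shared_attention": 0.7, "shared_reality": 0.8,
          "receptivity": 0.6, "variety_of_perspective": 0.4}
# More variety, but shared reality drops sharply.
after = dict(before, variety_of_perspective=0.9, shared_reality=0.3)

print(health(before))  # 0.4
print(health(after))   # 0.3 -- one metric went up, yet health fell
```

The worked example shows exactly the trade-off Dorsey names: optimizing one indicator in isolation can lower the composite.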
CA: Just picking up on some of the questions flooding in here.
JD: Constant questioning.
CA: A lot of people are puzzled why, like, how hard is it to get rid of Nazis from Twitter?
JD: (Laughs)
So we have policies around violent extremist groups, and the majority of our work and our terms of service works on conduct, not content. So we're actually looking for conduct. Conduct being using the service to repeatedly or episodically harass someone, using hateful imagery that might be associated with the KKK or the American Nazi Party. Those are all things that we act on immediately. We're in a situation right now where that term is used fairly loosely, and we just cannot take any one mention of that word accusing someone else as a factual indication that they should be removed from the platform. So a lot of our models are based around, number one: Is this account associated with a violent extremist group? And if so, we can take action. And we have done so on the KKK and the American Nazi Party and others. And number two: Are they using imagery or conduct that would associate them as such as well?
CA: How many people do you have working on content moderation to look at this?
JD: It varies. We want to be flexible on this, because we want to make sure that we're, number one, building algorithms instead of just hiring massive amounts of people, because we need to make sure that this is scalable, and there are no amount of people that can actually scale this. So this is why we've done so much work around proactive detection of abuse that humans can then review. We want to have a situation where algorithms are constantly scouring every single tweet and bringing the most interesting ones to the top so that humans can bring their judgment to whether we should take action or not, based on our terms of service.
WPR: But there's not an amount of people that are scalable, but how many people do you currently have monitoring these accounts, and how do you figure out what's enough?
JD: They're completely flexible. Sometimes we associate folks with spam. Sometimes we associate folks with abuse and harassment. We're going to make sure that we have flexibility in our people so that we can direct them at what is most needed. Sometimes, the elections. We've had a string of elections in Mexico, one coming up in India, obviously, the election last year, the midterm election, so we just want to be flexible with our resources. So when people -- just as an example, if you go to our current terms of service and you bring the page up, and you're wondering about abuse and harassment that you just received and whether it was against our terms of service to report it, the first thing you see when you open that page is around intellectual property protection. You scroll down and you get to abuse, harassment and everything else that you might be experiencing.
So I don't know how that happened over the company's history, but we put that above the thing that people want the most information on and to actually act on. And just our ordering shows the world what we believed was important. So we're changing all that. We're ordering it the right way, but we're also simplifying the rules so that they're human-readable so that people can actually understand themselves when something is against our terms and when something is not. And then we're making -- again, our big focus is on removing the burden of work from the victims. So that means push more towards technology, rather than humans doing the work -- that means the humans receiving the abuse and also the humans having to review that work. So we want to make sure that we're not just encouraging more work around something that's super, super negative, and we want to have a good balance between the technology and where humans can actually be creative, which is the judgment of the rules, and not just all the mechanical stuff of finding and reporting them. So that's how we think about it.
CA: I'm curious to dig in more about what you said. I mean, I love that you said you are looking for ways to re-tweak the fundamental design of the system to discourage some of the reactive behavior, and perhaps -- to use Tristan Harris-type language -- engage people's more reflective thinking. How far advanced is that? What would alternatives to that "like" button be?
JD: Well, first and foremost, my personal goal with the service is that I believe fundamentally that public conversation is critical. There are existential problems facing the world that are facing the entire world, not any one particular nation-state, that global public conversation benefits. And that is one of the unique dynamics of Twitter, that it is completely open, it is completely public, it is completely fluid, and anyone can see any other conversation and participate in it. So there are conversations like climate change. There are conversations like the displacement in the work through artificial intelligence. There are conversations like economic disparity. No matter what any one nation-state does, they will not be able to solve the problem alone. It takes coordination around the world, and that's where I think Twitter can play a part.
The second thing is that Twitter, right now, when you go to it, you don't necessarily walk away feeling like you learned something. Some people do. Some people have a very, very rich network, a very rich community that they learn from every single day. But it takes a lot of work and a lot of time to build up to that. So we want to get people to those topics and those interests much, much faster and make sure that they're finding something that, no matter how much time they spend on Twitter -- and I don't want to maximize the time on Twitter, I want to maximize what they actually take away from it and what they learn from it, and --
CA: Well, do you, though? Because that's the core question that a lot of people want to know. Surely, Jack, you're constrained, to a huge extent, by the fact that you're a public company, you've got investors pressing on you, the number one way you make your money is from advertising -- that depends on user engagement. Are you willing to sacrifice user time, if need be, to go for a more reflective conversation?
JD: Yeah; more relevance means less time on the service, and that's perfectly fine, because we want to make sure that, like, you're coming to Twitter, and you see something immediately that you learn from and that you push. We can still serve an ad against that. That doesn't mean you need to spend any more time to see more. The second thing we're looking at --
CA: But just -- on that goal, daily active usage, if you're measuring that, that doesn't necessarily mean things that people value every day. It may well mean things that people are drawn to like a moth to the flame, every day. We are addicted, because we see something that pisses us off, so we go in and add fuel to the fire, and the daily active usage goes up, and there's more ad revenue there, but we all get angrier with each other. How do you define ... "Daily active usage" seems like a really dangerous term to be optimizing.
(Applause)
JD: Taken alone, it is, but you didn't let me finish the other metric, which is, we're watching for conversations and conversation chains. So we want to incentivize healthy contribution back to the network, and what we believe that is is actually participating in conversation that is healthy, as defined by those four indicators I articulated earlier.
So you can't just optimize around one metric. You have to balance and look constantly at what is actually going to create a healthy contribution to the network and a healthy experience for people. Ultimately, we want to get to a metric where people can tell us, "Hey, I learned something from Twitter, and I'm walking away with something valuable." That is our goal ultimately over time, but that's going to take some time.
CA: You come over to many, I think to me, as this enigma. This is possibly unfair, but I woke up the other night with this picture of how I found I was thinking about you and the situation, that we're on this great voyage with you on this ship called the "Twittanic" --
(Laughter)
and there are people on board in steerage who are expressing discomfort, and you, unlike many other captains, are saying, "Well, tell me, talk to me, listen to me, I want to hear." And they talk to you, and they say, "We're worried about the iceberg ahead." And you go, "You know, that is a powerful point, and our ship, frankly, hasn't been built properly for steering as well as it might." And we say, "Please do something." And you go to the bridge, and we're waiting, and we look, and then you're showing this extraordinary calm, but we're all standing outside, saying, "Jack, turn the fucking wheel!"
You know?
(Laughter)
(Applause)
I mean --
(Applause)
It's democracy at stake. It's our culture at stake. It's our world at stake. And Twitter is amazing and shapes so much. It's not as big as some of the other platforms, but the people of influence use it to set the agenda, and it's just hard to imagine a more important role in the world than to ... I mean, you're doing a brilliant job of listening, Jack, and hearing people, but to actually dial up the urgency and move on this stuff -- will you do that?
JD: Yes, and we have been moving substantially. I mean, there's been a few dynamics in Twitter's history. One, when I came back to the company, we were in a pretty dire state in terms of our future, and not just from how people were using the platform, but from a corporate narrative as well. So we had to fix a bunch of the foundation, turn the company around, go through two crazy layoffs, because we just got too big for what we were doing, and we focused all of our energy on this concept of serving the public conversation. And that took some work. And as we dived into that, we realized some of the issues with the fundamentals. We could do a bunch of superficial things to address what you're talking about, but we need the changes to last, and that means going really, really deep and paying attention to what we started 13 years ago and really questioning how the system works and how the framework works and what is needed for the world today, given how quickly everything is moving and how people are using it. So we are working as quickly as we can, but quickness will not get the job done. It's focus, it's prioritization, it's understanding the fundamentals of the network and building a framework that scales and that is resilient to change, and being open about where we are and being transparent about where we are so that we can continue to earn trust.
So I'm proud of all the frameworks that we've put in place. I'm proud of our direction. We obviously can move faster, but that required just stopping a bunch of stupid stuff we were doing in the past.
CA: All right. Well, I suspect there are many people here who, if given the chance, would love to help you on this change-making agenda you're on, and I don't know if Whitney -- Jack, thank you for coming here and speaking so openly. It took courage. I really appreciate what you said, and good luck with your mission.
JD: Thank you so much. Thanks for having me.
(Applause)
Thank you.