Chris Anderson: What worries you right now? You've been very open about lots of issues on Twitter. What would be your top worry about where things are right now?
Jack Dorsey: Right now, the health of the conversation. So, our purpose is to serve the public conversation, and we have seen a number of attacks on it. We've seen abuse, we've seen harassment, we've seen manipulation, automation, human coordination, misinformation. So these are all dynamics that we were not expecting 13 years ago when we were starting the company. But we do now see them at scale, and what worries me most is just our ability to address it in a systemic way that is scalable, that has a rigorous understanding of how we're taking action, a transparent understanding of how we're taking action and a rigorous appeals process for when we're wrong, because we will be wrong.
Whitney Pennington Rodgers: I'm really glad to hear that that's something that concerns you, because I think there's been a lot written about people who feel they've been abused and harassed on Twitter, and I think no one more so than women and women of color and black women. And there's been data that's come out -- Amnesty International put out a report a few months ago showing that, for a subset of active black female Twitter users, on average one in 10 of the tweets they received were some form of harassment. And so when you think about health for the community on Twitter, "health for everyone" is good to hear, but specifically: How are you looking to make Twitter a safe space for that subset, for women, for women of color and black women?
JD: Yeah. So it's a pretty terrible situation when you're coming to a service that, ideally, you want to learn something about the world, and you spend the majority of your time reporting abuse, receiving abuse, receiving harassment. So what we're looking most deeply at is just the incentives that the platform naturally provides and the service provides. Right now, the dynamic of the system makes it super-easy to harass and to abuse others through the service, and unfortunately, the majority of our system in the past worked entirely based on people reporting harassment and abuse. So about midway last year, we decided that we were going to apply a lot more machine learning, a lot more deep learning to the problem, and try to be a lot more proactive around where abuse is happening, so that we can take the burden off the victim completely. And we've made some progress recently. About 38 percent of abusive tweets are now proactively identified by machine learning algorithms so that people don't actually have to report them. But those that are identified are still reviewed by humans, so we do not take down content or accounts without a human actually reviewing it. But that was from zero percent just a year ago. So that meant, at that zero percent, every single person who received abuse had to actually report it, which was a lot of work for them, a lot of work for us and just ultimately unfair.
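The workflow Dorsey describes -- a model proactively flags likely-abusive tweets, but no content or account is actioned without a human review -- can be sketched roughly as follows. This is a hypothetical illustration, not Twitter's actual system; the class names, the toy keyword scorer, and the 0.8 threshold are all made up for the example.

```python
from dataclasses import dataclass, field


@dataclass
class Tweet:
    tweet_id: int
    text: str


@dataclass
class ModerationPipeline:
    """Hypothetical sketch: a model flags, humans decide."""
    flag_threshold: float = 0.8
    review_queue: list = field(default_factory=list)
    actioned: list = field(default_factory=list)

    def abuse_score(self, tweet: Tweet) -> float:
        # Stand-in for a trained classifier; here, a toy keyword heuristic.
        toxic_words = {"idiot", "loser", "trash"}
        words = tweet.text.lower().split()
        hits = sum(w.strip(".,!?") in toxic_words for w in words)
        return min(1.0, hits / max(len(words), 1) * 5)

    def ingest(self, tweet: Tweet) -> None:
        # Proactive detection: the victim never has to file a report
        # for the tweet to reach the review queue.
        if self.abuse_score(tweet) >= self.flag_threshold:
            self.review_queue.append(tweet)

    def human_review(self, tweet_id: int, violates_terms: bool) -> None:
        # Nothing is taken down without a human actually reviewing it.
        tweet = next(t for t in self.review_queue if t.tweet_id == tweet_id)
        self.review_queue.remove(tweet)
        if violates_terms:
            self.actioned.append(tweet)
```

The design point is the split of responsibilities: the model only shifts the reporting burden off victims, while the take-down decision stays with a person.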
The other thing that we're doing is making sure that we, as a company, have representation of all the communities that we're trying to serve. We can't build a business that is successful unless we have a diversity of perspective inside of our walls that actually feel these issues every single day. And that's not just with the team that's doing the work, it's also within our leadership as well. So we need to continue to build empathy for what people are experiencing and give them better tools to act on it and also give our customers a much better and easier approach to handle some of the things that they're seeing. So a lot of what we're doing is around technology, but we're also looking at the incentives on the service: What does Twitter incentivize you to do when you first open it up? And in the past, it's incented a lot of outrage, it's incented a lot of mob behavior, it's incented a lot of group harassment. And we have to look a lot deeper at some of the fundamentals of what the service is doing to make the bigger shifts. We can make a bunch of small shifts around technology, as I just described, but ultimately, we have to look deeply at the dynamics in the network itself, and that's what we're doing.
CA: But what's your sense -- what is the kind of thing that you might be able to change that would actually fundamentally shift behavior?
JD: Well, one of the things -- we started the service with this concept of following an account, as an example, and I don't believe that's why people actually come to Twitter. I believe Twitter is best as an interest-based network. People come with a particular interest. They have to do a ton of work to find and follow the related accounts around those interests. What we could do instead is allow you to follow an interest, follow a hashtag, follow a trend, follow a community, which gives us the opportunity to show all of the accounts, all the topics, all the moments, all the hashtags that are associated with that particular topic and interest, which really opens up the perspective that you see. But that is a huge fundamental shift to bias the entire network away from just an account bias towards a topics and interest bias.
CA: Because isn't it the case that one reason why you have so much content on there is a result of putting millions of people around the world in this kind of gladiatorial contest with each other for followers, for attention? Like, from the point of view of people who just read Twitter, that's not an issue, but for the people who actually create it, everyone's out there saying, "You know, I wish I had a few more 'likes,' followers, retweets." And so they're constantly experimenting, trying to find the path to do that. And what we've all discovered is that the number one path to do that is to be some form of provocative, obnoxious, eloquently obnoxious, like, eloquent insults are a dream on Twitter, where you rapidly pile up -- and it becomes this self-fueling process of driving outrage. How do you defuse that?
JD: Yeah, I mean, I think you're spot on, but that goes back to the incentives. Like, one of the choices we made in the early days was we had this number that showed how many people follow you. We decided that number should be big and bold, and anything that's on the page that's big and bold has importance, and those are the things that you want to drive. Was that the right decision at the time? Probably not. If I had to start the service again, I would not emphasize the follower count as much. I would not emphasize the "like" count as much. I don't think I would even create "like" in the first place, because it doesn't actually push what we believe now to be the most important thing, which is healthy contribution back to the network and conversation to the network, participation within conversation, learning something from the conversation. Those are not things that we thought of 13 years ago, and we believe are extremely important right now.
So we have to look at how we display the follower count, how we display retweet count, how we display "likes," and just ask the deep question: Is this really the number that we want people to drive up? Is this the thing that, when you open Twitter, you see, "That's the thing I need to increase?" And I don't believe that's the case right now.
(Applause)
WPR: I think we should look at some of the tweets that are coming in from the audience as well.
CA: Let's see what you guys are asking. I mean, this is -- generally, one of the amazing things about Twitter is how you can use it for crowd wisdom, you know, that more knowledge, more questions, more points of view than you can imagine, and sometimes, many of them are really healthy.
WPR: I think one I saw that passed already quickly down here, "What's Twitter's plan to combat foreign meddling in the 2020 US election?" I think that's something that's an issue we're seeing on the internet in general, that we have a lot of malicious automated activity happening. And on Twitter, for example, in fact, we have some work that's come from our friends at Zignal Labs, and maybe we can even see that to give us an example of what exactly I'm talking about, where you have these bots, if you will, or coordinated automated malicious account activity, that is being used to influence things like elections. And in this example we have from Zignal which they've shared with us using the data that they have from Twitter, you actually see that in this case, white represents the humans -- human accounts, each dot is an account. The pinker it is, the more automated the activity is. And you can see how you have a few humans interacting with bots. In this case, it's related to the election in Israel and spreading misinformation about Benny Gantz, and as we know, in the end, that was an election that Netanyahu won by a slim margin, and that may have been in some case influenced by this. And when you think about that happening on Twitter, what are the things that you're doing, specifically, to ensure you don't have misinformation like this spreading in this way, influencing people in ways that could affect democracy?
JD: Just to back up a bit, we asked ourselves a question: Can we actually measure the health of a conversation, and what does that mean? And in the same way that you have indicators and we have indicators as humans in terms of are we healthy or not, such as temperature, the flushness of your face, we believe that we could find the indicators of conversational health. And we worked with a lab called Cortico at MIT to propose four starter indicators that we believe we could ultimately measure on the system. And the first one is what we're calling shared attention. It's a measure of how much of the conversation is attentive on the same topic versus disparate. The second one is called shared reality, and this is what percentage of the conversation shares the same facts -- not whether those facts are truthful or not, but are we sharing the same facts as we converse? The third is receptivity: How much of the conversation is receptive or civil or the inverse, toxic? And then the fourth is variety of perspective. So, are we seeing filter bubbles or echo chambers, or are we actually getting a variety of opinions within the conversation? And implicit in all four of these is the understanding that, as they increase, the conversation gets healthier and healthier.
So our first step is to see if we can measure these online, which we believe we can. We have the most momentum around receptivity. We have a toxicity score, a toxicity model, on our system that can actually measure whether you are likely to walk away from a conversation that you're having on Twitter because you feel it's toxic, with some pretty high degree. We're working to measure the rest, and the next step is, as we build up solutions, to watch how these measurements trend over time and continue to experiment. And our goal is to make sure that these are balanced, because if you increase one, you might decrease another. If you increase variety of perspective, you might actually decrease shared reality.
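The four indicators Dorsey names (shared attention, shared reality, receptivity, variety of perspective) are meant to be tracked together and kept in balance. A minimal sketch of that idea follows; the scoring and the `tension` measure are invented for illustration -- Cortico's real definitions are far more involved.

```python
from dataclasses import dataclass


@dataclass
class HealthIndicators:
    """Hypothetical dashboard for the four conversation-health indicators."""
    shared_attention: float  # fraction of the conversation on the same topic
    shared_reality: float    # fraction sharing the same facts
    receptivity: float       # how civil vs. toxic the conversation is
    variety: float           # spread of distinct viewpoints

    def overall(self) -> float:
        # Higher on all four indicators means a healthier conversation.
        return (self.shared_attention + self.shared_reality
                + self.receptivity + self.variety) / 4

    def tension(self) -> float:
        # Dorsey's caveat: raising variety of perspective may lower
        # shared reality, so watch the gap between them, not just the mean.
        return abs(self.variety - self.shared_reality)
```

Optimizing `overall()` alone would hide exactly the trade-off Dorsey warns about, which is why a balance check like `tension()` sits alongside the average.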
CA: Just picking up on some of the questions flooding in here.
JD: Constant questioning.
CA: A lot of people are puzzled why, like, how hard is it to get rid of Nazis from Twitter?
JD: (Laughs)
So we have policies around violent extremist groups, and the majority of our work and our terms of service works on conduct, not content. So we're actually looking for conduct. Conduct being using the service to repeatedly or episodically harass someone, using hateful imagery that might be associated with the KKK or the American Nazi Party. Those are all things that we act on immediately. We're in a situation right now where that term is used fairly loosely, and we just cannot take any one mention of that word accusing someone else as a factual indication that they should be removed from the platform. So a lot of our models are based around, number one: Is this account associated with a violent extremist group? And if so, we can take action. And we have done so on the KKK and the American Nazi Party and others. And number two: Are they using imagery or conduct that would associate them as such as well?
CA: How many people do you have working on content moderation to look at this?
JD: It varies. We want to be flexible on this, because we want to make sure that we're, number one, building algorithms instead of just hiring massive amounts of people, because we need to make sure that this is scalable, and no amount of people could actually scale this. So this is why we've done so much work around proactive detection of abuse that humans can then review. We want to have a situation where algorithms are constantly scouring every single tweet and bringing the most interesting ones to the top so that humans can bring their judgment to whether we should take action or not, based on our terms of service.
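That "bring the most interesting ones to the top" step is essentially a priority ordering over model-scored tweets, so reviewers spend their limited time on the highest-confidence flags first. A minimal sketch, with hypothetical names and scores:

```python
import heapq


def review_order(scored_tweets):
    """Yield (tweet_id, score) pairs, highest model score first,
    so human reviewers see the most likely violations at the top.
    A hypothetical sketch, not Twitter's actual triage logic."""
    # Python's heapq is a min-heap, so negate scores for max-first order.
    heap = [(-score, tweet_id) for tweet_id, score in scored_tweets]
    heapq.heapify(heap)
    while heap:
        neg_score, tweet_id = heapq.heappop(heap)
        yield tweet_id, -neg_score
```

With a heap, each reviewer pull is O(log n) even as the flagged backlog grows, which matters when the queue is fed continuously rather than in batches.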
WPR: But you say no amount of people is scalable -- so how many people do you currently have monitoring these accounts, and how do you figure out what's enough?
JD: They're completely flexible. Sometimes we associate folks with spam. Sometimes we associate folks with abuse and harassment. We're going to make sure that we have flexibility in our people so that we can direct them at what is most needed. Sometimes, the elections. We've had a string of elections in Mexico, one coming up in India, obviously, the election last year, the midterm election, so we just want to be flexible with our resources. So when people -- just as an example, if you go to our current terms of service and you bring the page up, and you're wondering about abuse and harassment that you just received and whether it was against our terms of service to report it, the first thing you see when you open that page is around intellectual property protection. You scroll down and you get to abuse, harassment and everything else that you might be experiencing.
So I don't know how that happened over the company's history, but we put that above the thing that people want the most information on and to actually act on. And just our ordering shows the world what we believed was important. So we're changing all that. We're ordering it the right way, but we're also simplifying the rules so that they're human-readable so that people can actually understand themselves when something is against our terms and when something is not. And then we're making -- again, our big focus is on removing the burden of work from the victims. So that means push more towards technology, rather than humans doing the work -- that means the humans receiving the abuse and also the humans having to review that work. So we want to make sure that we're not just encouraging more work around something that's super, super negative, and we want to have a good balance between the technology and where humans can actually be creative, which is the judgment of the rules, and not just all the mechanical stuff of finding and reporting them. So that's how we think about it.
CA: I'm curious to dig in more about what you said. I mean, I love that you said you are looking for ways to re-tweak the fundamental design of the system to discourage some of the reactive behavior, and perhaps -- to use Tristan Harris-type language -- engage people's more reflective thinking. How far advanced is that? What would alternatives to that "like" button be?
JD: Well, first and foremost, my personal goal with the service is that I believe fundamentally that public conversation is critical. There are existential problems facing the world that are facing the entire world, not any one particular nation-state, that global public conversation benefits. And that is one of the unique dynamics of Twitter, that it is completely open, it is completely public, it is completely fluid, and anyone can see any other conversation and participate in it. So there are conversations like climate change. There are conversations like the displacement in the work through artificial intelligence. There are conversations like economic disparity. No matter what any one nation-state does, they will not be able to solve the problem alone. It takes coordination around the world, and that's where I think Twitter can play a part.
The second thing is that Twitter, right now, when you go to it, you don't necessarily walk away feeling like you learned something. Some people do. Some people have a very, very rich network, a very rich community that they learn from every single day. But it takes a lot of work and a lot of time to build up to that. So we want to get people to those topics and those interests much, much faster and make sure that they're finding something that, no matter how much time they spend on Twitter -- and I don't want to maximize the time on Twitter, I want to maximize what they actually take away from it and what they learn from it, and --
CA: Well, do you, though? Because that's the core question that a lot of people want to know. Surely, Jack, you're constrained, to a huge extent, by the fact that you're a public company, you've got investors pressing on you, the number one way you make your money is from advertising -- that depends on user engagement. Are you willing to sacrifice user time, if need be, to go for a more reflective conversation?
JD: Yeah; more relevance means less time on the service, and that's perfectly fine, because we want to make sure that, like, you're coming to Twitter, and you see something immediately that you learn from and that you push. We can still serve an ad against that. That doesn't mean you need to spend any more time to see more. The second thing we're looking at --
CA: But just -- on that goal, daily active usage, if you're measuring that, that doesn't necessarily mean things that people value every day. It may well mean things that people are drawn to like a moth to the flame, every day. We are addicted, because we see something that pisses us off, so we go in and add fuel to the fire, and the daily active usage goes up, and there's more ad revenue there, but we all get angrier with each other. How do you define ... "Daily active usage" seems like a really dangerous term to be optimizing.
(Applause)
JD: Taken alone, it is, but you didn't let me finish the other metric, which is, we're watching for conversations and conversation chains. So we want to incentivize healthy contribution back to the network, and what we believe that is is actually participating in conversation that is healthy, as defined by those four indicators I articulated earlier.
So you can't just optimize around one metric. You have to balance and look constantly at what is actually going to create a healthy contribution to the network and a healthy experience for people. Ultimately, we want to get to a metric where people can tell us, "Hey, I learned something from Twitter, and I'm walking away with something valuable." That is our goal ultimately over time, but that's going to take some time.
CA: You come across to many, I think to me, as this enigma. This is possibly unfair, but I woke up the other night with this picture of how I found I was thinking about you and the situation, that we're on this great voyage with you on this ship called the "Twittanic" --
(Laughter)
and there are people on board in steerage who are expressing discomfort, and you, unlike many other captains, are saying, "Well, tell me, talk to me, listen to me, I want to hear." And they talk to you, and they say, "We're worried about the iceberg ahead." And you go, "You know, that is a powerful point, and our ship, frankly, hasn't been built properly for steering as well as it might." And we say, "Please do something." And you go to the bridge, and we're waiting, and we look, and then you're showing this extraordinary calm, but we're all standing outside, saying, "Jack, turn the fucking wheel!"
You know?
(Laughter)
(Applause)
I mean --
(Applause)
It's democracy at stake. It's our culture at stake. It's our world at stake. And Twitter is amazing and shapes so much. It's not as big as some of the other platforms, but the people of influence use it to set the agenda, and it's just hard to imagine a more important role in the world than to ... I mean, you're doing a brilliant job of listening, Jack, and hearing people, but to actually dial up the urgency and move on this stuff -- will you do that?
JD: Yes, and we have been moving substantially. I mean, there's been a few dynamics in Twitter's history. One, when I came back to the company, we were in a pretty dire state in terms of our future, and not just from how people were using the platform, but from a corporate narrative as well. So we had to fix a bunch of the foundation, turn the company around, go through two crazy layoffs, because we just got too big for what we were doing, and we focused all of our energy on this concept of serving the public conversation. And that took some work. And as we dived into that, we realized some of the issues with the fundamentals. We could do a bunch of superficial things to address what you're talking about, but we need the changes to last, and that means going really, really deep and paying attention to what we started 13 years ago and really questioning how the system works and how the framework works and what is needed for the world today, given how quickly everything is moving and how people are using it. So we are working as quickly as we can, but quickness will not get the job done. It's focus, it's prioritization, it's understanding the fundamentals of the network and building a framework that scales and that is resilient to change, and being open about where we are and being transparent about where we are so that we can continue to earn trust.
So I'm proud of all the frameworks that we've put in place. I'm proud of our direction. We obviously can move faster, but that required just stopping a bunch of stupid stuff we were doing in the past.
CA: All right. Well, I suspect there are many people here who, if given the chance, would love to help you on this change-making agenda you're on, and I don't know if Whitney -- Jack, thank you for coming here and speaking so openly. It took courage. I really appreciate what you said, and good luck with your mission.
JD: Thank you so much. Thanks for having me.
(Applause)
Thank you.