Chris Anderson: Nick Bostrom. So, you have already given us so many crazy ideas out there. I think a couple of decades ago, you made the case that we might all be living in a simulation, or perhaps probably were. More recently, you've painted the most vivid examples of how artificial general intelligence could go horribly wrong. And now this year, you're about to publish a paper that presents something called the vulnerable world hypothesis. And our job this evening is to give the illustrated guide to that. So let's do that. What is that hypothesis?
Nick Bostrom: It's trying to think about a sort of structural feature of the current human condition. You like the urn metaphor, so I'm going to use that to explain it. So picture a big urn filled with balls representing ideas, methods, possible technologies. You can think of the history of human creativity as the process of reaching into this urn and pulling out one ball after another, and the net effect so far has been hugely beneficial, right? We've extracted a great many white balls, some various shades of gray, mixed blessings. We haven't so far pulled out the black ball -- a technology that invariably destroys the civilization that discovers it. So the paper tries to think about what could such a black ball be.
CA: So you define that ball as one that would inevitably bring about civilizational destruction.
NB: Unless we exit what I call the semi-anarchic default condition. But sort of, by default.
CA: So, you make the case compelling by showing some sort of counterexamples where you believe that so far we've actually got lucky, that we might have pulled out that death ball without even knowing it. So there's this quote, what's this quote?
NB: Well, I guess it's just meant to illustrate the difficulty of foreseeing what basic discoveries will lead to. We just don't have that capability. Because we have become quite good at pulling out balls, but we don't really have the ability to put the ball back into the urn, right. We can invent, but we can't un-invent. So our strategy, such as it is, is to hope that there is no black ball in the urn.
CA: So once it's out, it's out, and you can't put it back in, and you think we've been lucky. So talk through a couple of these examples. You talk about different types of vulnerability.
NB: So the easiest type to understand is a technology that just makes it very easy to cause massive amounts of destruction. Synthetic biology might be a fecund source of that kind of black ball, but many other possible things we could -- think of geoengineering, really great, right? We could combat global warming, but you don't want it to get too easy either, you don't want any random person and his grandmother to have the ability to radically alter the earth's climate. Or maybe lethal autonomous drones, mass-produced, mosquito-sized killer bot swarms. Nanotechnology, artificial general intelligence.
CA: You argue in the paper that it's a matter of luck that when we discovered that nuclear power could create a bomb, it might have been the case that you could have created a bomb with much easier resources, accessible to anyone.
NB: Yeah, so think back to the 1930s where for the first time we make some breakthroughs in nuclear physics, some genius figures out that it's possible to create a nuclear chain reaction and then realizes that this could lead to the bomb. And we do some more work, it turns out that what you require to make a nuclear bomb is highly enriched uranium or plutonium, which are very difficult materials to get. You need ultracentrifuges, you need reactors, like, massive amounts of energy. But suppose it had turned out instead there had been an easy way to unlock the energy of the atom. That maybe by baking sand in the microwave oven or something like that you could have created a nuclear detonation. So we know that that's physically impossible. But before you did the relevant physics how could you have known how it would turn out?
CA: Although, couldn't you argue that for life to evolve on Earth, that implied a sort of stable environment, that if it were possible to create massive nuclear reactions relatively easily, the Earth would never have been stable, that we wouldn't be here at all?
NB: Yeah, unless there were something that is easy to do on purpose but that wouldn't happen by random chance. So, like things we can easily do, we can stack 10 blocks on top of one another, but in nature, you're not going to find, like, a stack of 10 blocks.
CA: OK, so this is probably the one that many of us worry about most, and yes, synthetic biology is perhaps the quickest route that we can foresee in our near future to get us here.
NB: Yeah, and so think about what that would have meant if, say, anybody by working in their kitchen for an afternoon could destroy a city. It's hard to see how modern civilization as we know it could have survived that. Because in any population of a million people, there will always be some who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed.
CA: So here's another type of vulnerability. Talk about this.
NB: Yeah, so in addition to these kind of obvious types of black balls that would just make it possible to blow up a lot of things, other types would act by creating bad incentives for humans to do things that are harmful. So, the Type-2a, we might call it that, is to think about some technology that incentivizes great powers to use their massive amounts of force to create destruction. So, nuclear weapons were actually very close to this, right? What we did, we spent over 10 trillion dollars to build 70,000 nuclear warheads and put them on hair-trigger alert. And there were several times during the Cold War we almost blew each other up. It's not because a lot of people felt this would be a great idea, let's all spend 10 trillion dollars to blow ourselves up, but the incentives were such that we were finding ourselves -- this could have been worse. Imagine if there had been a safe first strike. Then it might have been very tricky, in a crisis situation, to refrain from launching all their nuclear missiles. If nothing else, because you would fear that the other side might do it.
CA: Right, mutual assured destruction kept the Cold War relatively stable, without that, we might not be here now.
NB: It could have been more unstable than it was. And there could be other properties of technology. It could have been harder to have arms treaties, if instead of nuclear weapons there had been some smaller thing or something less distinctive.
CA: And as well as bad incentives for powerful actors, you also worry about bad incentives for all of us, in Type-2b here.
NB: Yeah, so, here we might take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually have no significant effect, right? But if billions of people do it, cumulatively, it has a damaging effect. Now, global warming could have been a lot worse than it is. So we have the climate sensitivity parameter, right. It's a parameter that says how much warmer does it get if you emit a certain amount of greenhouse gases. But, suppose that it had been the case that with the amount of greenhouse gases we emitted, instead of the temperature rising by, say, between three and 4.5 degrees by 2100, suppose it had been 15 degrees or 20 degrees. Like, then we might have been in a very bad situation. Or suppose that renewable energy had just been a lot harder to do. Or that there had been more fossil fuels in the ground.
CA: Couldn't you argue that if in that case of -- if what we are doing today had resulted in 10 degrees difference in the time period that we could see, actually humanity would have got off its ass and done something about it. We're stupid, but we're not maybe that stupid. Or maybe we are.
NB: I wouldn't bet on it.
(Laughter)
You could imagine other features. So, right now, it's a little bit difficult to switch to renewables and stuff, right, but it can be done. But it might just have been, with slightly different physics, it could have been much more expensive to do these things.
CA: And what's your view, Nick? Do you think, putting these possibilities together, that this earth, humanity that we are, we count as a vulnerable world? That there is a death ball in our future?
NB: It's hard to say. I mean, I think there might well be various black balls in the urn, that's what it looks like. There might also be some golden balls that would help us protect against black balls. And I don't know which order they will come out.
CA: I mean, one possible philosophical critique of this idea is that it implies a view that the future is essentially settled. That there either is that ball there or it's not. And in a way, that's not a view of the future that I want to believe. I want to believe that the future is undetermined, that our decisions today will determine what kind of balls we pull out of that urn.
NB: I mean, if we just keep inventing, like, eventually we will pull out all the balls. I mean, I think there's a kind of weak form of technological determinism that is quite plausible, like, you're unlikely to encounter a society that uses flint axes and jet planes. But you can almost think of a technology as a set of affordances. So technology is the thing that enables us to do various things and achieve various effects in the world. How we'd then use that, of course depends on human choice. But if we think about these three types of vulnerability, they make quite weak assumptions about how we would choose to use them. So a Type-1 vulnerability, again, this massive, destructive power, it's a fairly weak assumption to think that in a population of millions of people there would be some that would choose to use it destructively.
CA: For me, the single most disturbing argument is that we actually might have some kind of view into the urn that makes it actually very likely that we're doomed. Namely, if you believe in accelerating power, that technology inherently accelerates, that we build the tools that make us more powerful, then at some point you get to a stage where a single individual can take us all down, and then it looks like we're screwed. Isn't that argument quite alarming?
NB: Ah, yeah.
(Laughter)
I think -- Yeah, we get more and more power, and [it's] easier and easier to use those powers, but we can also invent technologies that kind of help us control how people use those powers.
CA: So let's talk about that, let's talk about the response. Suppose that thinking about all the possibilities that are out there now -- it's not just synbio, it's things like cyberwarfare, artificial intelligence, etc., etc. -- that there might be serious doom in our future. What are the possible responses? And you've talked about four possible responses as well.
NB: Restricting technological development doesn't seem promising, if we are talking about a general halt to technological progress. I think it's neither feasible, nor would it be desirable even if we could do it. I think there might be very limited areas where maybe you would want slower technological progress. You don't, I think, want faster progress in bioweapons, or in, say, isotope separation, that would make it easier to create nukes.
CA: I mean, I used to be fully on board with that. But I would like to actually push back on that for a minute. Just because, first of all, if you look at the history of the last couple of decades, you know, it's always been push forward at full speed, it's OK, that's our only choice. But if you look at globalization and the rapid acceleration of that, if you look at the strategy of "move fast and break things" and what happened with that, and then you look at the potential for synthetic biology, I don't know that we should move forward rapidly or without any kind of restriction to a world where you could have a DNA printer in every home and high school lab. There are some restrictions, right?
NB: Possibly, there is the first part, the not feasible. If you think it would be desirable to stop it, there's the problem of feasibility. So it doesn't really help if one nation kind of --
CA: No, it doesn't help if one nation does, but we've had treaties before. That's really how we survived the nuclear threat, was by going out there and going through the painful process of negotiating. I just wonder whether the logic isn't that we, as a matter of global priority, we shouldn't go out there and try, like, now start negotiating really strict rules on where synthetic bioresearch is done, that it's not something that you want to democratize, no?
NB: I totally agree with that -- that it would be desirable, for example, maybe to have DNA synthesis machines, not as a product where each lab has their own device, but maybe as a service. Maybe there could be four or five places in the world where you send in your digital blueprint and the DNA comes back, right? And then, you would have the ability, if one day it really looked like it was necessary, we would have like, a finite set of choke points. So I think you want to look for kind of special opportunities, where you could have tighter control.
CA: Your belief is, fundamentally, we are not going to be successful in just holding back. Someone, somewhere -- North Korea, you know -- someone is going to go there and discover this knowledge, if it's there to be found.
NB: That looks plausible under current conditions. It's not just synthetic biology, either. I mean, any kind of profound, new change in the world could turn out to be a black ball.
CA: Let's look at another possible response.
NB: This also, I think, has only limited potential. So, with the Type-1 vulnerability again, I mean, if you could reduce the number of people who are incentivized to destroy the world, if only they could get access and the means, that would be good.
CA: In this image that you asked us to do, you're imagining these drones flying around the world with facial recognition. When they spot someone showing signs of sociopathic behavior, they shower them with love, they fix them.
NB: I think it's like a hybrid picture. Eliminate can either mean, like, incarcerate or kill, or it can mean persuade them to a better view of the world. But the point is that, suppose you were extremely successful in this, and you reduced the number of such individuals by half. And if you want to do it by persuasion, you are competing against all other powerful forces that are trying to persuade people, parties, religion, education system. But suppose you could reduce it by half, I don't think the risk would be reduced by half. Maybe by five or 10 percent.
CA: You're not recommending that we gamble humanity's future on response two.
NB: I think it's all good to try to deter and persuade people, but we shouldn't rely on that as our only safeguard.
CA: How about three?
NB: I think there are two general methods that we could use to achieve the ability to stabilize the world against the whole spectrum of possible vulnerabilities. And we probably would need both. So, one is an extremely effective ability to do preventive policing. Such that you could intercept. If anybody started to do this dangerous thing, you could intercept them in real time, and stop them. So this would require ubiquitous surveillance, everybody would be monitored all the time.
CA: This is essentially a form of "Minority Report."
NB: You would have maybe AI algorithms, big freedom centers that were reviewing this, etc., etc.
CA: You know that mass surveillance is not a very popular term right now?
(Laughter)
NB: Yeah, so this little device there, imagine that kind of necklace that you would have to wear at all times with multidirectional cameras. But, to make it go down better, just call it the "freedom tag" or something like that.
(Laughter)
CA: OK. I mean, this is the conversation, friends, this is why this is such a mind-blowing conversation.
NB: Actually, there's a whole big conversation on this on its own, obviously. There are huge problems and risks with that, right? We may come back to that. So the other, the final, the other general stabilization capability is kind of plugging another governance gap. So the surveillance would be kind of a governance gap at the micro level, like, preventing anybody from ever doing something highly illegal. Then, there's a corresponding governance gap at the macro level, at the global level. You would need the ability, reliably, to prevent the worst kinds of global coordination failures, to avoid wars between great powers, arms races, cataclysmic commons problems, in order to deal with the Type-2a vulnerabilities.
CA: Global governance is a term that's definitely way out of fashion right now, but could you make the case that throughout history, the history of humanity is that at every stage of technological power increase, people have reorganized and sort of centralized the power. So, for example, when a roving band of criminals could take over a society, the response was, well, you have a nation-state and you centralize force, a police force or an army, so, "No, you can't do that." The logic, perhaps, of having a single person or a single group able to take out humanity means at some point we're going to have to go this route, at least in some form, no?
NB: It's certainly true that the scale of political organization has increased over the course of human history. It used to be hunter-gatherer band, right, and then chiefdom, city-states, nations, now there are international organizations and so on and so forth. Again, I just want to make sure I get the chance to stress that obviously there are huge downsides and indeed, massive risks, both to mass surveillance and to global governance. I'm just pointing out that if we are lucky, the world could be such that these would be the only ways you could survive a black ball.
CA: The logic of this theory, it seems to me, is that we've got to recognize we can't have it all. That the sort of, I would say, naive dream that many of us had that technology is always going to be a force for good, keep going, don't stop, go as fast as you can and not pay attention to some of the consequences, that's actually just not an option. We can have that. If we have that, we're going to have to accept some of these other very uncomfortable things with it, and kind of be in this arms race with ourselves of, you want the power, you better limit it, you better figure out how to limit it.
NB: I think it is an option, a very tempting option, it's in a sense the easiest option and it might work, but it means we are fundamentally vulnerable to extracting a black ball. Now, I think with a bit of coordination, like, if you did solve this macrogovernance problem, and the microgovernance problem, then we could extract all the balls from the urn and we'd benefit greatly.
CA: I mean, if we're living in a simulation, does it matter? We just reboot.
(Laughter)
NB: Then ... I ...
(Laughter) I didn't see that one coming.
CA: So what's your view? Putting all the pieces together, how likely is it that we're doomed?
(Laughter)
I love how people laugh when you ask that question.
NB: On an individual level, we seem to kind of be doomed anyway, just with the time line, we're rotting and aging and all kinds of things, right?
(Laughter)
It's actually a little bit tricky. If you want to set up so that you can attach a probability, first, who are we? If you're very old, probably you'll die of natural causes, if you're very young, you might have a 100-year -- the probability might depend on who you ask. Then the threshold, like, what counts as civilizational devastation? In the paper I don't require an existential catastrophe in order for it to count. This is just a definitional matter, I say a billion dead, or a reduction of world GDP by 50 percent, but depending on what you say the threshold is, you get a different probability estimate. But I guess you could put me down as a frightened optimist.
(Laughter)
CA: You're a frightened optimist, and I think you've just created a large number of other frightened ... people.
(Laughter)
NB: In the simulation.
CA: In a simulation. Nick Bostrom, your mind amazes me, thank you so much for scaring the living daylights out of us.
(Applause)