Chris Anderson: Nick Bostrom. So, you have already given us so many crazy ideas out there. I think a couple of decades ago, you made the case that we might all be living in a simulation, or perhaps probably were. More recently, you've painted the most vivid examples of how artificial general intelligence could go horribly wrong. And now this year, you're about to publish a paper that presents something called the vulnerable world hypothesis. And our job this evening is to give the illustrated guide to that. So let's do that. What is that hypothesis?
Nick Bostrom: It's trying to think about a sort of structural feature of the current human condition. You like the urn metaphor, so I'm going to use that to explain it. So picture a big urn filled with balls representing ideas, methods, possible technologies. You can think of the history of human creativity as the process of reaching into this urn and pulling out one ball after another, and the net effect so far has been hugely beneficial, right? We've extracted a great many white balls, some various shades of gray, mixed blessings. We haven't so far pulled out the black ball -- a technology that invariably destroys the civilization that discovers it. So the paper tries to think about what could such a black ball be.
CA: So you define that ball as one that would inevitably bring about civilizational destruction.
NB: Unless we exit what I call the semi-anarchic default condition. But sort of, by default.
CA: So, you make the case compelling by showing some sort of counterexamples where you believe that so far we've actually got lucky, that we might have pulled out that death ball without even knowing it. So there's this quote, what's this quote?
NB: Well, I guess it's just meant to illustrate the difficulty of foreseeing what basic discoveries will lead to. We just don't have that capability. Because we have become quite good at pulling out balls, but we don't really have the ability to put the ball back into the urn, right. We can invent, but we can't un-invent. So our strategy, such as it is, is to hope that there is no black ball in the urn.
CA: So once it's out, it's out, and you can't put it back in, and you think we've been lucky. So talk through a couple of these examples. You talk about different types of vulnerability.
NB: So the easiest type to understand is a technology that just makes it very easy to cause massive amounts of destruction. Synthetic biology might be a fecund source of that kind of black ball, but many other possible things we could -- think of geoengineering, really great, right? We could combat global warming, but you don't want it to get too easy either, you don't want any random person and his grandmother to have the ability to radically alter the earth's climate. Or maybe lethal autonomous drones, mass-produced, mosquito-sized killer bot swarms. Nanotechnology, artificial general intelligence.
CA: You argue in the paper that it's a matter of luck that when we discovered that nuclear power could create a bomb, it might have been the case that you could have created a bomb with much easier resources, accessible to anyone.
NB: Yeah, so think back to the 1930s where for the first time we make some breakthroughs in nuclear physics, some genius figures out that it's possible to create a nuclear chain reaction and then realizes that this could lead to the bomb. And we do some more work, it turns out that what you require to make a nuclear bomb is highly enriched uranium or plutonium, which are very difficult materials to get. You need ultracentrifuges, you need reactors, like, massive amounts of energy. But suppose it had turned out instead there had been an easy way to unlock the energy of the atom. That maybe by baking sand in the microwave oven or something like that you could have created a nuclear detonation. So we know that that's physically impossible. But before you did the relevant physics how could you have known how it would turn out?
CA: Although, couldn't you argue that for life to evolve on Earth that implied a sort of stable environment, that if it was possible to create massive nuclear reactions relatively easily, the Earth would never have been stable, that we wouldn't be here at all.
NB: Yeah, unless there were something that is easy to do on purpose but that wouldn't happen by random chance. So, like things we can easily do, we can stack 10 blocks on top of one another, but in nature, you're not going to find, like, a stack of 10 blocks.
CA: OK, so this is probably the one that many of us worry about most, and yes, synthetic biology is perhaps the quickest route that we can foresee in our near future to get us here.
NB: Yeah, and so think about what that would have meant if, say, anybody by working in their kitchen for an afternoon could destroy a city. It's hard to see how modern civilization as we know it could have survived that. Because in any population of a million people, there will always be some who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed.
CA: So here's another type of vulnerability. Talk about this.
NB: Yeah, so in addition to these kind of obvious types of black balls that would just make it possible to blow up a lot of things, other types would act by creating bad incentives for humans to do things that are harmful. So, the Type-2a, we might call it that, is to think about some technology that incentivizes great powers to use their massive amounts of force to create destruction. So, nuclear weapons were actually very close to this, right? What we did, we spent over 10 trillion dollars to build 70,000 nuclear warheads and put them on hair-trigger alert. And there were several times during the Cold War we almost blew each other up. It's not because a lot of people felt this would be a great idea, let's all spend 10 trillion dollars to blow ourselves up, but the incentives were such that we were finding ourselves -- this could have been worse. Imagine if there had been a safe first strike. Then it might have been very tricky, in a crisis situation, to refrain from launching all their nuclear missiles. If nothing else, because you would fear that the other side might do it.
CA: Right, mutual assured destruction kept the Cold War relatively stable, without that, we might not be here now.
NB: It could have been more unstable than it was. And there could be other properties of technology. It could have been harder to have arms treaties, if instead of nuclear weapons there had been some smaller thing or something less distinctive.
CA: And as well as bad incentives for powerful actors, you also worry about bad incentives for all of us, in Type-2b here.
NB: Yeah, so, here we might take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually have no significant effect, right? But if billions of people do it, cumulatively, it has a damaging effect. Now, global warming could have been a lot worse than it is. So we have the climate sensitivity parameter, right. It's a parameter that says how much warmer does it get if you emit a certain amount of greenhouse gases. But, suppose that it had been the case that with the amount of greenhouse gases we emitted, instead of the temperature rising by, say, between three and 4.5 degrees by 2100, suppose it had been 15 degrees or 20 degrees. Like, then we might have been in a very bad situation. Or suppose that renewable energy had just been a lot harder to do. Or that there had been more fossil fuels in the ground.
CA: Couldn't you argue that if in that case of -- if what we are doing today had resulted in 10 degrees difference in the time period that we could see, actually humanity would have got off its ass and done something about it. We're stupid, but we're not maybe that stupid. Or maybe we are.
NB: I wouldn't bet on it.
(Laughter)
You could imagine other features. So, right now, it's a little bit difficult to switch to renewables and stuff, right, but it can be done. But it might just have been, with slightly different physics, it could have been much more expensive to do these things.
CA: And what's your view, Nick? Do you think, putting these possibilities together, that this earth, humanity that we are, we count as a vulnerable world? That there is a death ball in our future?
NB: It's hard to say. I mean, I think there might well be various black balls in the urn, that's what it looks like. There might also be some golden balls that would help us protect against black balls. And I don't know which order they will come out.
CA: I mean, one possible philosophical critique of this idea is that it implies a view that the future is essentially settled. That there either is that ball there or it's not. And in a way, that's not a view of the future that I want to believe. I want to believe that the future is undetermined, that our decisions today will determine what kind of balls we pull out of that urn.
NB: I mean, if we just keep inventing, like, eventually we will pull out all the balls. I mean, I think there's a kind of weak form of technological determinism that is quite plausible, like, you're unlikely to encounter a society that uses flint axes and jet planes. But you can almost think of a technology as a set of affordances. So technology is the thing that enables us to do various things and achieve various effects in the world. How we'd then use that, of course depends on human choice. But if we think about these three types of vulnerability, they make quite weak assumptions about how we would choose to use them. So a Type-1 vulnerability, again, this massive, destructive power, it's a fairly weak assumption to think that in a population of millions of people there would be some that would choose to use it destructively.
CA: For me, the single most disturbing argument is that we actually might have some kind of view into the urn that makes it actually very likely that we're doomed. Namely, if you believe in accelerating power, that technology inherently accelerates, that we build the tools that make us more powerful, then at some point you get to a stage where a single individual can take us all down, and then it looks like we're screwed. Isn't that argument quite alarming?
NB: Ah, yeah.
(Laughter)
I think -- Yeah, we get more and more power, and [it's] easier and easier to use those powers, but we can also invent technologies that kind of help us control how people use those powers.
CA: So let's talk about that, let's talk about the response. Suppose that thinking about all the possibilities that are out there now -- it's not just synbio, it's things like cyberwarfare, artificial intelligence, etc., etc. -- that there might be serious doom in our future. What are the possible responses? And you've talked about four possible responses as well.
NB: Restricting technological development doesn't seem promising, if we are talking about a general halt to technological progress. I think it's neither feasible, nor would it be desirable even if we could do it. I think there might be very limited areas where maybe you would want slower technological progress. You don't, I think, want faster progress in bioweapons, or in, say, isotope separation, that would make it easier to create nukes.
CA: I mean, I used to be fully on board with that. But I would like to actually push back on that for a minute. Just because, first of all, if you look at the history of the last couple of decades, you know, it's always been push forward at full speed, it's OK, that's our only choice. But if you look at globalization and the rapid acceleration of that, if you look at the strategy of "move fast and break things" and what happened with that, and then you look at the potential for synthetic biology, I don't know that we should move forward rapidly or without any kind of restriction to a world where you could have a DNA printer in every home and high school lab. There are some restrictions, right?
NB: Possibly, there is the first part, the not feasible. If you think it would be desirable to stop it, there's the problem of feasibility. So it doesn't really help if one nation kind of --
CA: No, it doesn't help if one nation does, but we've had treaties before. That's really how we survived the nuclear threat, was by going out there and going through the painful process of negotiating. I just wonder whether the logic isn't that we, as a matter of global priority, we shouldn't go out there and try, like, now start negotiating really strict rules on where synthetic bioresearch is done, that it's not something that you want to democratize, no?
NB: I totally agree with that -- that it would be desirable, for example, maybe to have DNA synthesis machines, not as a product where each lab has their own device, but maybe as a service. Maybe there could be four or five places in the world where you send in your digital blueprint and the DNA comes back, right? And then, you would have the ability, if one day it really looked like it was necessary, we would have like, a finite set of choke points. So I think you want to look for kind of special opportunities, where you could have tighter control.
CA: Your belief is, fundamentally, we are not going to be successful in just holding back. Someone, somewhere -- North Korea, you know -- someone is going to go there and discover this knowledge, if it's there to be found.
NB: That looks plausible under current conditions. It's not just synthetic biology, either. I mean, any kind of profound, new change in the world could turn out to be a black ball.
CA: Let's look at another possible response.
NB: This also, I think, has only limited potential. So, with the Type-1 vulnerability again, I mean, if you could reduce the number of people who are incentivized to destroy the world, if only they could get access and the means, that would be good.
CA: In this image that you asked us to do you're imagining these drones flying around the world with facial recognition. When they spot someone showing signs of sociopathic behavior, they shower them with love, they fix them.
NB: I think it's like a hybrid picture. Eliminate can either mean, like, incarcerate or kill, or it can mean persuade them to a better view of the world. But the point is that, suppose you were extremely successful in this, and you reduced the number of such individuals by half. And if you want to do it by persuasion, you are competing against all other powerful forces that are trying to persuade people, parties, religion, education system. But suppose you could reduce it by half, I don't think the risk would be reduced by half. Maybe by five or 10 percent.
CA: You're not recommending that we gamble humanity's future on response two.
NB: I think it's all good to try to deter and persuade people, but we shouldn't rely on that as our only safeguard.
CA: How about three?
NB: I think there are two general methods that we could use to achieve the ability to stabilize the world against the whole spectrum of possible vulnerabilities. And we probably would need both. So, one is an extremely effective ability to do preventive policing. Such that you could intercept. If anybody started to do this dangerous thing, you could intercept them in real time, and stop them. So this would require ubiquitous surveillance, everybody would be monitored all the time.
CA: This is "Minority Report," essentially, a form of.
NB: You would have maybe AI algorithms, big freedom centers that were reviewing this, etc., etc.
CA: You know that mass surveillance is not a very popular term right now?
(Laughter)
NB: Yeah, so this little device there, imagine that kind of necklace that you would have to wear at all times with multidirectional cameras. But, to make it go down better, just call it the "freedom tag" or something like that.
(Laughter)
CA: OK. I mean, this is the conversation, friends, this is why this is such a mind-blowing conversation.
NB: Actually, there's a whole big conversation on this on its own, obviously. There are huge problems and risks with that, right? We may come back to that. So the other, the final, the other general stabilization capability is kind of plugging another governance gap. So the surveillance would be kind of governance gap at the microlevel, like, preventing anybody from ever doing something highly illegal. Then, there's a corresponding governance gap at the macro level, at the global level. You would need the ability, reliably, to prevent the worst kinds of global coordination failures, to avoid wars between great powers, arms races, cataclysmic commons problems, in order to deal with the Type-2a vulnerabilities.
CA: Global governance is a term that's definitely way out of fashion right now, but could you make the case that throughout history, the history of humanity is that at every stage of technological power increase, people have reorganized and sort of centralized the power. So, for example, when a roving band of criminals could take over a society, the response was, well, you have a nation-state and you centralize force, a police force or an army, so, "No, you can't do that." The logic, perhaps, of having a single person or a single group able to take out humanity means at some point we're going to have to go this route, at least in some form, no?
NB: It's certainly true that the scale of political organization has increased over the course of human history. It used to be hunter-gatherer band, right, and then chiefdom, city-states, nations, now there are international organizations and so on and so forth. Again, I just want to make sure I get the chance to stress that obviously there are huge downsides and indeed, massive risks, both to mass surveillance and to global governance. I'm just pointing out that if we are lucky, the world could be such that these would be the only ways you could survive a black ball.
CA: The logic of this theory, it seems to me, is that we've got to recognize we can't have it all. That the sort of, I would say, naive dream that many of us had that technology is always going to be a force for good, keep going, don't stop, go as fast as you can and not pay attention to some of the consequences, that's actually just not an option. We can have that. If we have that, we're going to have to accept some of these other very uncomfortable things with it, and kind of be in this arms race with ourselves of, you want the power, you better limit it, you better figure out how to limit it.
NB: I think it is an option, a very tempting option, it's in a sense the easiest option and it might work, but it means we are fundamentally vulnerable to extracting a black ball. Now, I think with a bit of coordination, like, if you did solve this macrogovernance problem, and the microgovernance problem, then we could extract all the balls from the urn and we'd benefit greatly.
CA: I mean, if we're living in a simulation, does it matter? We just reboot.
(Laughter)
NB: Then ... I ...
(Laughter) I didn't see that one coming.
CA: So what's your view? Putting all the pieces together, how likely is it that we're doomed?
(Laughter)
I love how people laugh when you ask that question.
NB: On an individual level, we seem to kind of be doomed anyway, just with the time line, we're rotting and aging and all kinds of things, right?
(Laughter)
It's actually a little bit tricky. If you want to set up so that you can attach a probability, first, who are we? If you're very old, probably you'll die of natural causes, if you're very young, you might have a 100-year -- the probability might depend on who you ask. Then the threshold, like, what counts as civilizational devastation? In the paper I don't require an existential catastrophe in order for it to count. This is just a definitional matter, I say a billion dead, or a reduction of world GDP by 50 percent, but depending on what you say the threshold is, you get a different probability estimate. But I guess you could put me down as a frightened optimist.
(Laughter)
CA: You're a frightened optimist, and I think you've just created a large number of other frightened ... people.
(Laughter)
NB: In the simulation.
CA: In a simulation. Nick Bostrom, your mind amazes me, thank you so much for scaring the living daylights out of us.
(Applause)