I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?
(Laughter)
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.
Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.
The first assumption is that intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.
It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.
The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.
Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken.
(Laughter)
Sorry, a chicken.
(Laughter)
There's no reason for me to make this talk more depressing than it needs to be.
(Laughter)
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
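The speed argument above is simple arithmetic, and it can be checked in a few lines. This is a minimal sketch, treating the speaker's "million times faster" figure as a rough assumption rather than a measured fact:

```python
# Rough check of the talk's speed argument.
# Assumption (the speaker's ballpark figure): electronic circuits run
# about a million times faster than biochemical ones.
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 52

# One week of wall-clock run-time equals this many weeks of
# human-level intellectual work at the assumed speedup.
subjective_weeks = 1 * SPEEDUP

# Convert subjective weeks to subjective years.
subjective_years = subjective_weeks / WEEKS_PER_YEAR

print(f"{subjective_years:,.0f} subjective years per machine-week")
# about 19,231 -- which the talk rounds to "20,000 years"
```

The exact figure matters less than its scale: even if the speedup assumption is off by an order of magnitude, a week of run-time still buys millennia of subjective work.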
The other thing that's worrying, frankly, is this: imagine the best-case scenario. Imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.
So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
(Laughter)
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order. It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."
(Laughter)
No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.
The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." Would we just be counting down the months until the mothership lands? We would feel a little more urgency than we do.
Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.
(Laughter)
The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.
Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.
But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.
Thank you very much.
(Applause)