When I was a kid, I was the quintessential nerd. I think some of you were, too.
(Laughter)
And you, sir, who laughed the loudest, you probably still are.
(Laughter)
I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun.
(Laughter)
You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea.
(Laughter)
Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.
I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.
After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team.
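As a quick sanity check on the 13-minute figure (the average Earth-Mars distance below is a rough textbook value I'm assuming, not a number from the talk), the one-way light-time is just distance divided by the speed of light:

```python
# One-way signal delay from Earth to Mars at an assumed average separation.
# 225 million km is a rough average; the true distance varies from about
# 55 to 400 million km over the synodic cycle.
C_KM_PER_S = 299_792.458        # speed of light in km/s
AVG_MARS_KM = 225_000_000       # assumed average Earth-Mars distance

delay_s = AVG_MARS_KM / C_KM_PER_S
print(f"one-way delay: ~{delay_s / 60:.1f} minutes")  # ~12.5, i.e. about 13
```

At that delay, a round-trip question-and-answer takes roughly 25 minutes, which is why an emergency cannot wait on Houston.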
Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies.
(Laughter)
Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.
The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do. So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others.
The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people were worried they would destroy all civil conversation. When the written word became pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.
So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too. I like flowers. Come on. To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law but at the same time I am fusing with it the sense of mercy and justice that is part of that law. In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well-trained.
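The "we don't program them, we teach them" point can be made concrete with a toy sketch (entirely my own illustration, with invented data; the talk names no particular algorithm): a nearest-centroid classifier that learns flower labels from labeled examples instead of from hand-written rules.

```python
# Minimal sketch of "teaching" instead of programming: the behavior comes
# from examples, not from hand-coded rules. Data and labels are invented.
from collections import defaultdict

def train(examples):
    """examples: list of ((petal_len, petal_width), label); returns centroids."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (petal_len, petal_width), label in examples:
        s = sums[label]
        s[0] += petal_len; s[1] += petal_width; s[2] += 1
    return {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((f - c) ** 2
                                   for f, c in zip(features, centroids[lbl])))

# "Show it thousands of flowers" -- here, just a handful of toy examples.
flowers = [((1.4, 0.2), "setosa"), ((1.3, 0.2), "setosa"),
           ((4.7, 1.4), "versicolor"), ((4.5, 1.5), "versicolor")]
model = train(flowers)
print(predict(model, (1.5, 0.3)))   # lands near the setosa examples
```

The curated examples play the role of the "ground truth" the talk mentions: whatever biases and values are in the training set end up in the machine.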
But, you may ask, what about rogue agents, some well-funded nongovernmental organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is that such a system requires substantial and subtle training, far beyond the resources of an individual. And furthermore, it's far more than just injecting an internet virus into the world, where you push a button and all of a sudden it's in a million places and laptops start blowing up all over the place. Efforts of that kind are much larger, and we'll certainly see them coming.
Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them.
(Laughter)
We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?
And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.
Thank you very much.
(Applause)