When I was a kid, I was the quintessential nerd. I think some of you were, too.
(Laughter)
And you, sir, who laughed the loudest, you probably still are.
(Laughter)
I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun.
(Laughter)
You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea.
(Laughter)
Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.
I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.
After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team.
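The 13-minute figure quoted above can be sanity-checked with a back-of-the-envelope calculation. This sketch assumes an average Earth-Mars separation of about 225 million km (a commonly cited mean; the actual distance varies widely with orbital positions):

```python
# One-way light-speed signal delay from Earth to Mars.
# AVG_EARTH_MARS_KM is an assumed average separation, not a figure from the talk.
AVG_EARTH_MARS_KM = 225e6          # ~225 million km, mean Earth-Mars distance
SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in km/s

delay_s = AVG_EARTH_MARS_KM / SPEED_OF_LIGHT_KM_S
delay_min = delay_s / 60
print(f"one-way signal delay: {delay_min:.1f} minutes")  # ~12.5 minutes
```

The result lands right around the "on average 13 minutes" cited in the talk, which is why a round trip of roughly half an hour rules out Houston-style real-time intervention.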
Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies.
(Laughter)
Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.
The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do. So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others.
The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people were worried it would destroy all civil conversation. At a point in time we saw the written word become pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.
So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too. I like flowers. Come on. To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law but at the same time I am fusing with it the sense of mercy and justice that is part of that law. In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well-trained.
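The "we don't program them, we teach them" idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any real system from the talk: instead of hand-writing rules for what a rose or a daisy looks like, we hand the system labeled examples and let it generalize (here with a toy nearest-centroid classifier over made-up petal measurements):

```python
# "Teaching, not programming": the system learns categories from labeled
# examples rather than from hand-coded rules. Feature vectors are
# (petal length, petal width) in cm -- invented numbers for illustration.

def train(examples):
    """Average the example vectors per label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(model, features):
    """Classify by the closest learned centroid (squared Euclidean distance)."""
    def dist(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# The talk's "thousands of flowers" reduced to a handful of toy examples:
examples = [
    ([5.1, 1.8], "rose"), ([4.9, 1.7], "rose"),
    ([1.4, 0.2], "daisy"), ([1.3, 0.3], "daisy"),
]
model = train(examples)
print(predict(model, [5.0, 1.6]))  # a new flower, classified by similarity
```

The point of the sketch is that the "knowledge" lives in the examples we choose to show the system, which is exactly why the talk argues that the values embedded in the training data (the ground truth) end up embodied in the machine.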
But, you may ask, what about rogue agents, some well-funded nongovernment organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is such a system requires substantial and subtle training, far beyond the resources of an individual. And furthermore, it's far more than just injecting an internet virus to the world, where you push a button, all of a sudden it's in a million places and laptops start blowing up all over the place. Now, these kinds of systems are much larger, and we'll certainly see them coming.
Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them.
(Laughter)
We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?
And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.
Thank you very much.
(Applause)