In the coming years, artificial intelligence is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how. The following are excerpts from a World Economic Forum interview where renowned computer science professor and AI expert Stuart Russell helps separate the sense from the nonsense.
There’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and that nothing else in the universe matters, so that even if they have to kill everybody else in Starbucks to get you the coffee before it closes, they should do that. No, that’s not what you mean. All the other things that we mutually care about should factor into their behavior as well.
And the problem with the way we build AI systems now is we give them a fixed objective. The algorithms require us to specify everything in the objective. And if you say, can we fix the acidification of the oceans? Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective— don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well I meant don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. If you ask a human to get you a cup of coffee, and you happen to be in the Hotel George Sand in Paris, where the coffee is 13 euros a cup, it’s entirely reasonable to come back and say, well, it’s 13 euros, are you sure you want it, or I could go next door and get one? And it’s a perfectly normal thing for a person to ask: I’m going to repaint your house; is it okay if I take off the drainpipes and then put them back? We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because the way we build them now, they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. And it’s when you build machines that believe with certainty that they have the objective, that’s when you get this sort of psychopathic behavior. And I think we see the same thing in humans.
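Russell’s contrast between a machine that is certain of its objective and one that treats the objective as uncertain can be sketched in a few lines. This is a hypothetical toy, not Russell’s actual formulation (his research frames this as assistance games with probabilistic uncertainty over human preferences); the action names, scores, and risk values below are invented for illustration.

```python
# Toy sketch (illustrative only): an agent that is certain of its fixed
# objective maximizes it regardless of side effects, while an agent that
# is uncertain about the true objective defers to a human before taking
# a risky, irreversible action.

def certain_agent(actions, objective):
    # Believes the stated objective is the whole story: picks the action
    # with the highest score, side effects included.
    return max(actions, key=objective)

def uncertain_agent(actions, objective, side_effect_risk, ask_human):
    # Treats the stated objective as evidence about what the human wants.
    # If the best-scoring action carries a known risk of a large unstated
    # cost, it checks with the human instead of acting unilaterally.
    best = max(actions, key=objective)
    if side_effect_risk(best) > 0:
        return best if ask_human(best) else min(actions, key=side_effect_risk)
    return best

# Toy version of the ocean-acidification example from the interview:
actions = ["catalytic_reaction", "slow_safe_method"]
objective = {"catalytic_reaction": 100, "slow_safe_method": 10}.get
risk = {"catalytic_reaction": 1.0, "slow_safe_method": 0.0}.get

print(certain_agent(actions, objective))           # catalytic_reaction
print(uncertain_agent(actions, objective, risk,
                      ask_human=lambda a: False))  # slow_safe_method
```

The design point is the one Russell makes: control comes from the machine’s uncertainty, which gives it a reason to ask permission rather than optimize blindly.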
What happens when general purpose AI hits the real economy? How do things change? Can we adapt? This is a very old point. Amazingly, Aristotle actually has a passage where he says, look, if we had fully automated weaving machines and plectrums that could pluck the lyre and produce music without any humans, then we wouldn’t need any workers.
That idea, which I think Keynes in 1930 called “technological unemployment,” is very obvious to people. They think, yeah, of course, if the machine does the work, then I’m going to be unemployed.
You can think about the warehouses that companies are currently operating for e-commerce; they are half automated. Unlike an old warehouse, where you’ve got tons of stuff piled up all over the place and humans go and rummage around, bring things back, and send them off, here a robot goes and gets the shelving unit that contains the thing you need, but a human still has to pick the object out of the bin or off the shelf, because that’s still too difficult. But if, at the same time, you could make a robot accurate enough to pick pretty much any object within the very wide variety of objects that you can buy, that would, at a stroke, eliminate 3 or 4 million jobs.
There’s an interesting story that E.M. Forster wrote, where everyone is entirely machine dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person years of teaching and learning and an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
I think that’s something we have to understand as AI moves forward. The actual date of arrival of general purpose AI, you’re not going to be able to pinpoint; it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing. So with every advance in AI, it significantly expands the range of tasks that machines can do.
So in that sense, I think most experts say that by the end of the century, we’re very, very likely to have general purpose AI. The median is something around 2045. I’m a little more on the conservative side; I think the problem is harder than we think.
I like what John McCarthy, one of the founders of AI, said when he was asked this question: somewhere between five and 500 years. And we’re going to need, I think, several Einsteins to make it happen.