So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with.
[Pumpkin Trash Break]
(Laughter)
[Peanut Butter Slime]
[Strawberry Cream Disease]
(Laughter)
These flavors are not delicious, as we might have hoped they would be. So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?
In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much. In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably less. We're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond a collection of lines and textures and things. It doesn't know what a human actually is. So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want.
So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B. But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself via trial and error how to reach that goal. And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B. The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?
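The difference between spelling out the steps and only giving the goal can be sketched with a toy random-search optimizer. This is a hypothetical setup, not the actual experiment: the "robot" is just a list of segment heights, and the score only rewards landing at Point B, so the search happily converges on the degenerate tower-that-falls-over answer.

```python
import random

# Toy "get to Point B" task. A robot design is just a list of segment
# heights. We never say HOW to reach Point B (10 units away); the score
# only measures where the robot's tip ends up.
TARGET = 10.0

def reach(segments):
    # If the robot topples over as a rigid tower pivoting at its base,
    # its tip lands at a horizontal distance equal to its total height.
    return sum(segments)

def score(segments):
    # Goal: land at Point B. No credit for anything resembling walking.
    return -abs(TARGET - reach(segments))

def random_search(n_segments=4, iters=2000, seed=0):
    # Trial and error: perturb the current best design, keep improvements.
    rng = random.Random(seed)
    best = [rng.uniform(0.1, 1.0) for _ in range(n_segments)]
    for _ in range(iters):
        candidate = [max(0.1, s + rng.gauss(0, 0.2)) for s in best]
        if score(candidate) > score(best):
            best = candidate
    return best

solution = random_search()
# The search finds the degenerate answer: a tower roughly 10 units
# tall that "solves" the task by falling over, not by using legs.
print(round(reach(solution), 2))
```

Nothing in the score rules out the tower, so the optimizer has no reason to avoid it; that is the whole point of the example.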
So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ...
(Laughter)
And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.
So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast, you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.
(Laughter)
So in my opinion, what really should have been a whole lot weirder is the "Terminator" robots. Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor. When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did, where I wanted the AI to copy paint colors, to invent new paint colors, given a list like the one here on the left. And here's what the AI actually came up with.
[Sindis Poop, Turdly, Suffer, Gray Pubic]
(Laughter)
So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.
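What "imitating the letter combinations it had seen" means can be sketched with a tiny character-level Markov chain — a stand-in for the neural network actually used, trained here on a short hypothetical name list rather than the real paint-color data. It learns only which letter tends to follow the previous two, so its output is name-shaped pastiche with no notion of meaning.

```python
import random
from collections import defaultdict

# Hypothetical training names standing in for the real paint-color list.
names = ["Misty Rose", "Dusty Teal", "Forest Green", "Ocean Mist",
         "Desert Sand", "Stormy Gray", "Rosy Dawn", "Sandy Shore"]

def train(corpus, order=2):
    # Record, for every pair of letters, which letters followed it.
    # "^" pads the start of a name; "$" marks the end.
    table = defaultdict(list)
    for name in corpus:
        padded = "^" * order + name + "$"
        for i in range(len(padded) - order):
            table[padded[i:i + order]].append(padded[i + order])
    return table

def generate(table, order=2, rng=None):
    # Sample one letter at a time, conditioned only on the last two.
    rng = rng or random.Random()
    out = "^" * order
    while not out.endswith("$"):
        out += rng.choice(table[out[-order:]])
    return out[order:-1]

model = train(names)
rng = random.Random(1)
for _ in range(3):
    # Plausible-looking, meaning-blind letter combinations -- the same
    # failure mode as the AI's paint-color names, in miniature.
    print(generate(model, rng=rng))
```

The model's "entire world" really is the table built from the training strings, which is why nothing tells it that some letter combinations spell words it should avoid.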
So it is through the data that we often accidentally tell AI to do the wrong thing. This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this.
(Laughter)
And it didn't know that the fingers aren't part of the fish.
So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures are because the AI got confused. I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks on the side is not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.
Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in, "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing.
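The mechanism here is just correlation. A toy scorer trained on biased past decisions — entirely made-up data, not Amazon's, and a naive word-count scheme rather than their actual algorithm — ends up assigning the token itself a negative weight:

```python
from collections import Counter

# Hypothetical biased history: past hires (label 1) rarely contain the
# token "women"; rejections (label 0) often do.
history = [
    ("captain chess club", 1),
    ("lead developer intern", 1),
    ("women's soccer team captain", 0),
    ("society of women engineers member", 0),
    ("intern developer", 1),
    ("women's college graduate", 0),
]

def token_weights(examples):
    # Naive scorer: weight each token by how much more often it
    # appeared in hired resumes than in rejected ones.
    hired, rejected = Counter(), Counter()
    for text, label in examples:
        for tok in text.replace("'s", "").split():
            (hired if label else rejected)[tok] += 1
    vocab = set(hired) | set(rejected)
    return {tok: hired[tok] - rejected[tok] for tok in vocab}

weights = token_weights(history)
# The scorer has "learned" that the word itself predicts rejection.
# It copied a pattern in the humans' past decisions, not any fact
# about the candidates.
print(weights["women"])
```

The scorer is doing exactly what it was asked — match the historical labels — which is why the bias in the labels comes straight through.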
And this happens all the time with AI. AI can be really destructive and not know it. So the AIs that recommend new content on Facebook and YouTube are optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.
So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do. So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with the AI that we actually have in the present day. And present-day AI is plenty weird enough.
Thank you.
(Applause)