How many decisions have been made about you today, or this week, or this year, by artificial intelligence? I build AI for a living so, full disclosure, I'm kind of a nerd. And because I'm kind of a nerd, whenever a new news story comes out about artificial intelligence stealing all our jobs, or robots getting citizenship of an actual country, I'm the person my friends and followers message, freaking out about the future.
We see this everywhere: the media panic that our robot overlords are taking over. We could blame Hollywood for that. But in reality, that's not the problem we should be focusing on. There is a more pressing danger, a bigger risk with AI, that we need to fix first. So we are back to this question: How many decisions have been made about you today by AI? And how many of these were based on your gender, your race or your background?
Algorithms are being used all the time to make decisions about who we are and what we want. Some of the women in this room will know what I'm talking about if you've been made to sit through those pregnancy test adverts on YouTube like 1,000 times. Or you've scrolled past adverts of fertility clinics on your Facebook feed. Or in my case, Indian marriage bureaus.
(Laughter)
But AI isn't just being used to make decisions about what products we want to buy or which show we want to binge-watch next. I wonder how you'd feel about someone who thought things like this: "A black or Latino person is less likely than a white person to pay off their loan on time." "A person called John makes a better programmer than a person called Mary." "A black man is more likely to be a repeat offender than a white man." You're probably thinking, "Wow, that sounds like a pretty sexist, racist person," right? These are some real decisions that AI has made very recently, based on the biases it has learned from us, from the humans. AI is being used to help decide whether or not you get that job interview; how much you pay for your car insurance; how good your credit score is; and even what rating you get in your annual performance review. But these decisions are all being filtered through its assumptions about our identity, our race, our gender, our age. How is that happening?
Now, imagine an AI is helping a hiring manager find the next tech leader in the company. So far, the manager has been hiring mostly men. So the AI learns men are more likely to be programmers than women. And it's a very short leap from there to: men make better programmers than women. We have reinforced our own bias into the AI. And now, it's screening out female candidates. Hang on, if a human hiring manager did that, we'd be outraged, we wouldn't allow it. This kind of gender discrimination is not OK. And yet somehow, AI has become above the law, because a machine made the decision. And that's not all.
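The feedback loop described above can be made concrete with a toy sketch. This is not any real hiring system; it assumes a hypothetical model that does nothing more than learn the base rates in the manager's past hiring data, which is exactly how the bias gets reproduced:

```python
# Toy sketch (hypothetical data): a "model" that learns only the base
# rates of past hires will reproduce the hiring manager's bias.
from collections import Counter

# Hypothetical history: 90 of the last 100 hires were men.
past_hires = ["M"] * 90 + ["F"] * 10

# The model "learns" P(hired | gender) straight from that history.
counts = Counter(past_hires)
p_hired = {g: counts[g] / len(past_hires) for g in ("M", "F")}

def score(candidate_gender):
    # Women are scored lower purely because they were hired
    # less often in the past, not because of any ability signal.
    return p_hired[candidate_gender]

print(score("M"))  # 0.9
print(score("F"))  # 0.1
```

Rank candidates by this score and the female candidates are screened out automatically: the historical imbalance has become the decision rule.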
We are also reinforcing our bias in how we interact with AI. How often do you use a voice assistant like Siri, Alexa or even Cortana? They all have two things in common: one, they can never get my name right, and second, they are all female. They are designed to be our obedient servants, turning your lights on and off, ordering your shopping. You get male AIs too, but they tend to be more high-powered, like IBM Watson, making business decisions, Salesforce Einstein or ROSS, the robot lawyer. So poor robots, even they suffer from sexism in the workplace.
(Laughter)
Think about how these two things combine and affect a kid growing up in today's world around AI. So they're doing some research for a school project and they Google images of CEO. The algorithm shows them results of mostly men. And now, they Google personal assistant. As you can guess, it shows them mostly females. And then they want to put on some music, and maybe order some food, and now, they are barking orders at an obedient female voice assistant. Some of our brightest minds are creating this technology today. Technology that they could have created in any way they wanted. And yet, they have chosen to create it in the style of a 1950s "Mad Men" secretary. Yay!
But OK, don't worry, this is not going to end with me telling you that we are all heading towards sexist, racist machines running the world. The good news about AI is that it is entirely within our control. We get to teach the right values, the right ethics to AI. So there are three things we can do. One, we can be aware of our own biases and the bias in machines around us. Two, we can make sure that diverse teams are building this technology. And three, we have to give it diverse experiences to learn from. I can talk about the first two from personal experience. When you work in technology and you don't look like a Mark Zuckerberg or Elon Musk, your life is a little bit difficult, your ability gets questioned.
Here's just one example. Like most developers, I often join online tech forums and share my knowledge to help others. And I've found, when I log on as myself, with my own photo, my own name, I tend to get questions or comments like this: "What makes you think you're qualified to talk about AI?" "What makes you think you know about machine learning?" So, as you do, I made a new profile, and this time, instead of my own picture, I chose a cat with a jet pack on it. And I chose a name that did not reveal my gender. You can probably guess where this is going, right? So, this time, I didn't get any of those patronizing comments about my ability and I was able to actually get some work done. And it sucks, guys. I've been building robots since I was 15, I have a few degrees in computer science, and yet, I had to hide my gender in order for my work to be taken seriously.
So, what's going on here? Are men just better at technology than women? Another study found that when women coders on one platform hid their gender, like myself, their code was accepted four percent more often than men's. So this is not about the talent. This is about an elitism in AI that says a programmer needs to look like a certain person. What we really need to do to make AI better is bring in people from all kinds of backgrounds. We need people who can write and tell stories to help us create personalities of AI. We need people who can solve problems. We need people who face different challenges, and we need people who can tell us what the real issues that need fixing are, and help us find ways that technology can actually fix them. Because, when people from diverse backgrounds come together, when we build things in the right way, the possibilities are limitless.
And that's what I want to end by talking to you about. Less racist robots, less machines that are going to take our jobs -- and more about what technology can actually achieve. So, yes, some of the energy in the world of AI, in the world of technology, is going to be about what ads you see on your stream. But a lot of it is going towards making the world so much better. Think about a pregnant woman in the Democratic Republic of Congo, who has to walk 17 hours to her nearest rural prenatal clinic to get a checkup. What if she could get a diagnosis on her phone, instead? Or think about what AI could do for those one in three women in South Africa who face domestic violence. If it wasn't safe to talk out loud, they could get an AI service to raise the alarm, get financial and legal advice. These are all real examples of projects that people, including myself, are working on right now, using AI.
So, I'm sure in the next couple of days there will be yet another news story about the existential risk, robots taking over and coming for your jobs.
(Laughter)
And when something like that happens, I know I'll get the same messages worrying about the future. But I feel incredibly positive about this technology. This is our chance to remake the world into a much more equal place. But to do that, we need to build it the right way from the get-go. We need people of different genders, races, sexualities and backgrounds. We need women to be the makers and not just the machines who do the makers' bidding. We need to think very carefully about what we teach machines, what data we give them, so they don't just repeat our own past mistakes. So I hope I leave you thinking about two things. First, I hope you leave thinking about bias today. And that the next time you scroll past an advert that assumes you are interested in fertility clinics or online betting websites, you think and remember that the same technology is assuming that a black man will reoffend. Or that a woman is more likely to be a personal assistant than a CEO. And I hope that reminds you that we need to do something about it.
And second, I hope you think about the fact that you don't need to look a certain way or have a certain background in engineering or technology to create AI, which is going to be a phenomenal force for our future. You don't need to look like a Mark Zuckerberg, you can look like me. And it is up to all of us in this room to convince the governments and the corporations to build AI technology for everyone, including the edge cases. And for us all to get education about this phenomenal technology in the future. Because if we do that, then we've only just scratched the surface of what we can achieve with AI.
Thank you.
(Applause)