Hello, I'm Joy, a poet of code, on a mission to stop an unseen force that's rising, a force that I call "the coded gaze," my term for algorithmic bias.
Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace. Algorithmic bias can also lead to exclusionary experiences and discriminatory practices. Let me show you what I mean.
(Video) Joy Buolamwini: Hi, camera. I've got a face. Can you see my face? No-glasses face? You can see her face. What about my face? I've got a mask. Can you see my mask?
Joy Buolamwini: So how did this happen? Why am I sitting in front of a computer in a white mask, trying to be detected by a cheap webcam? Well, when I'm not fighting the coded gaze as a poet of code, I'm a graduate student at the MIT Media Lab, and there I have the opportunity to work on all sorts of whimsical projects, including the Aspire Mirror, a project I did so I could project digital masks onto my reflection. So in the morning, if I wanted to feel powerful, I could put on a lion. If I wanted to be uplifted, I might have a quote. So I used generic facial recognition software to build the system, but found it was really hard to test it unless I wore a white mask.
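To make that scene concrete, here is a minimal sketch of what generic face detection from a cheap webcam looks like in code. It assumes Python with the opencv-python package installed and uses one of OpenCV's bundled pre-trained Haar cascades as a stand-in for the "generic facial recognition software" mentioned above; it is not the actual Aspire Mirror code.

```python
# Minimal sketch: generic face detection on one webcam frame.
# The Haar cascade is a stand-in for the generic face-detection
# software described in the talk, not the Aspire Mirror system itself.
import cv2

# Load a pre-trained frontal-face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

camera = cv2.VideoCapture(0)  # the "cheap webcam"
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # If the model's training data under-represents faces like yours,
    # this list can come back empty even with a face clearly in frame.
    print(f"Faces detected: {len(faces)}")
```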
Unfortunately, I've run into this issue before. When I was an undergraduate at Georgia Tech studying computer science, I used to work on social robots, and one of my tasks was to get a robot to play peek-a-boo, a simple turn-taking game where partners cover their face and then uncover it saying, "Peek-a-boo!" The problem is, peek-a-boo doesn't really work if I can't see you, and my robot couldn't see me. But I borrowed my roommate's face to get the project done, submitted the assignment, and figured, you know what, somebody else will solve this problem.
Not too long after, I was in Hong Kong for an entrepreneurship competition. The organizers decided to take participants on a tour of local start-ups. One of the start-ups had a social robot, and they decided to do a demo. The demo worked on everybody until it got to me, and you can probably guess it. It couldn't detect my face. I asked the developers what was going on, and it turned out we had used the same generic facial recognition software. Halfway around the world, I learned that algorithmic bias can travel as quickly as it takes to download some files off of the internet.
So what's going on? Why isn't my face being detected? Well, we have to look at how we give machines sight. Computer vision uses machine learning techniques to do facial recognition. So how this works is, you create a training set with examples of faces. This is a face. This is a face. This is not a face. And over time, you can teach a computer how to recognize other faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.
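The training-set idea can be sketched in a few lines of code. The numbers below are made-up toy features rather than real image data; the only point is that a model learns whatever norm its examples establish, and scores examples far from that norm poorly.

```python
# Toy sketch of "this is a face / this is not a face" training.
# Feature vectors and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([
    [0.9, 0.8, 0.7, 0.9],   # "This is a face."
    [0.8, 0.9, 0.8, 0.8],   # "This is a face."
    [0.1, 0.2, 0.1, 0.0],   # "This is not a face."
    [0.0, 0.1, 0.2, 0.1],   # "This is not a face."
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# A face whose features deviate from the narrow norm above is scored
# with less confidence -- the crux of the training-set diversity problem.
unseen_face = np.array([[0.5, 0.4, 0.6, 0.5]])
print(model.predict_proba(unseen_face))
```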
But don't worry -- there's some good news. Training sets don't just materialize out of nowhere. We actually can create them. So there's an opportunity to create full-spectrum training sets that reflect a richer portrait of humanity.
Now you've seen in my examples how social robots were how I found out about exclusion with algorithmic bias. But algorithmic bias can also lead to discriminatory practices. Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that one in two adults in the US -- that's 117 million people -- have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy. Yet we know facial recognition is not fail-proof, and labeling faces consistently remains a challenge. You might have seen this on Facebook. My friends and I laugh all the time when we see other people mislabeled in our photos. But misidentifying a suspected criminal is no laughing matter, nor is breaching civil liberties.
Machine learning is being used for facial recognition, but it's also extending beyond the realm of computer vision. In her book, "Weapons of Math Destruction," data scientist Cathy O'Neil talks about the rising new WMDs -- widespread, mysterious and destructive algorithms that are increasingly being used to make decisions that impact more aspects of our lives. So who gets hired or fired? Do you get that loan? Do you get insurance? Are you admitted into the college you wanted to get into? Do you and I pay the same price for the same product purchased on the same platform?
Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison. So we really have to think about these decisions. Are they fair? And we've seen that algorithmic bias doesn't necessarily always lead to fair outcomes.
So what can we do about it? Well, we can start thinking about how we create more inclusive code and employ inclusive coding practices. It really starts with people. So who codes matters. Are we creating full-spectrum teams with diverse individuals who can check each other's blind spots? On the technical side, how we code matters. Are we factoring in fairness as we're developing systems? And finally, why we code matters. We've used tools of computational creation to unlock immense wealth. We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought. And so these are the three tenets that will make up the "incoding" movement. Who codes matters, how we code matters and why we code matters.
So to go towards incoding, we can start thinking about building platforms that can identify bias by collecting people's experiences like the ones I shared, but also auditing existing software. We can also start to create more inclusive training sets. Imagine a "Selfies for Inclusion" campaign where you and I can help developers test and create more inclusive training sets. And we can also start thinking more conscientiously about the social impact of the technology that we're developing.
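As one illustration of what "auditing existing software" could mean in practice, the sketch below computes a detector's hit rate separately for each demographic group on a labeled benchmark. The group names and results are hypothetical placeholders, not figures from any real audit.

```python
# Sketch of a simple per-group audit of a face detector.
# Each record pairs a (hypothetical) group label with whether the
# detector found the face in an image known to contain exactly one face.
from collections import defaultdict

results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += int(detected)

for group in sorted(totals):
    print(f"{group}: detection rate {hits[group] / totals[group]:.0%}")
# Large gaps between groups are the kind of evidence an auditing
# platform would surface and report.
```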
To get the incoding movement started, I've launched the Algorithmic Justice League, where anyone who cares about fairness can help fight the coded gaze. On codedgaze.com, you can report bias, request audits, become a tester and join the ongoing conversation, #codedgaze.
So I invite you to join me in creating a world where technology works for all of us, not just some of us, a world where we value inclusion and center social change.
Thank you.
(Applause)
But I have one question: Will you join me in the fight?
(Laughter)
(Applause)