Hello, I'm Joy, a poet of code, on a mission to stop an unseen force that's rising, a force that I call "the coded gaze," my term for algorithmic bias.
Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace. Algorithmic bias can also lead to exclusionary experiences and discriminatory practices. Let me show you what I mean.
(Video) Joy Buolamwini: Hi, camera. I've got a face. Can you see my face? No-glasses face? You can see her face. What about my face? I've got a mask. Can you see my mask?
Joy Buolamwini: So how did this happen? Why am I sitting in front of a computer in a white mask, trying to be detected by a cheap webcam? Well, when I'm not fighting the coded gaze as a poet of code, I'm a graduate student at the MIT Media Lab, and there I have the opportunity to work on all sorts of whimsical projects, including the Aspire Mirror, a project I did so I could project digital masks onto my reflection. So in the morning, if I wanted to feel powerful, I could put on a lion. If I wanted to be uplifted, I might have a quote. So I used generic facial recognition software to build the system, but found it was really hard to test it unless I wore a white mask.
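The talk doesn't name the software, but the pipeline it describes is the standard one: a pre-trained detector scanning webcam frames. Here is a minimal sketch of that kind of off-the-shelf setup, using OpenCV's stock Haar cascade purely as a stand-in for the generic face detection she mentions; whether it finds a given face depends entirely on the examples its model was trained on.

```python
# A minimal sketch of an off-the-shelf face detection loop, with
# OpenCV's bundled Haar cascade as a stand-in for the unnamed
# "generic facial recognition software" from the talk.
import cv2

# Pre-trained frontal-face detector shipped with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # the "cheap webcam"
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Whether any face is detected here depends entirely on the
    # data the cascade was trained on -- the crux of the talk.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face detection sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```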
Unfortunately, I've run into this issue before. When I was an undergraduate at Georgia Tech studying computer science, I used to work on social robots, and one of my tasks was to get a robot to play peek-a-boo, a simple turn-taking game where partners cover their face and then uncover it saying, "Peek-a-boo!" The problem is, peek-a-boo doesn't really work if I can't see you, and my robot couldn't see me. But I borrowed my roommate's face to get the project done, submitted the assignment, and figured, you know what, somebody else will solve this problem.
Not too long after, I was in Hong Kong for an entrepreneurship competition. The organizers decided to take participants on a tour of local start-ups. One of the start-ups had a social robot, and they decided to do a demo. The demo worked on everybody until it got to me, and you can probably guess it. It couldn't detect my face. I asked the developers what was going on, and it turned out we had used the same generic facial recognition software. Halfway around the world, I learned that algorithmic bias can travel as quickly as it takes to download some files off of the internet.
So what's going on? Why isn't my face being detected? Well, we have to look at how we give machines sight. Computer vision uses machine learning techniques to do facial recognition. So how this works is, you create a training set with examples of faces. This is a face. This is a face. This is not a face. And over time, you can teach a computer how to recognize other faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.
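To make that training loop concrete: the following toy sketch uses synthetic, made-up feature vectors in place of real images (nothing here is her actual system). A classifier fit almost entirely on one group's examples is confident on that group and uncertain on the group it rarely saw, which is the mechanism behind the failures above.

```python
# Toy illustration of training-set bias with synthetic 2-D
# "image features" -- purely illustrative, not real face data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

non_faces = rng.normal([-2.0, 0.0], 0.6, size=(500, 2))
faces_a   = rng.normal([ 2.0, 0.0], 0.6, size=(500, 2))  # well represented
faces_b   = rng.normal([ 0.0, 2.0], 0.6, size=(5, 2))    # barely represented

X = np.vstack([non_faces, faces_a, faces_b])
y = np.array([0] * 500 + [1] * 505)  # 1 = face, 0 = not a face

model = LogisticRegression().fit(X, y)

# The well-represented kind of face is detected with near
# certainty; the underrepresented kind hovers near the boundary.
print(model.predict_proba([[2.0, 0.0]])[0, 1])  # close to 1.0
print(model.predict_proba([[0.0, 2.0]])[0, 1])  # markedly less certain
```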
But don't worry -- there's some good news. Training sets don't just materialize out of nowhere. We actually can create them. So there's an opportunity to create full-spectrum training sets that reflect a richer portrait of humanity.
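One hypothetical way to act on that: curate the set so every group contributes equally, rather than letting whichever group is easiest to collect dominate. A minimal sketch, assuming each example carries a group label in its metadata (the labels and helper below are illustrative, not from the talk):

```python
# Minimal sketch of "full-spectrum" curation: resample a labeled
# pool so each group contributes the same number of examples.
import random
from collections import defaultdict

def balance(examples, key, per_group, seed=0):
    """examples: list of dicts; key: metadata field naming the group."""
    random.seed(seed)
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex[key]].append(ex)
    balanced = []
    for group, items in buckets.items():
        # Sample the same count per group (with replacement if a
        # group is scarce; collecting more real data is better).
        balanced += random.choices(items, k=per_group)
    return balanced

pool = [{"group": "A"}] * 90 + [{"group": "B"}] * 10  # skewed pool
print(len(balance(pool, key="group", per_group=50)))  # 100, evenly split
```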
Now you've seen in my examples how it was through social robots that I found out about exclusion caused by algorithmic bias. But algorithmic bias can also lead to discriminatory practices. Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that one in two adults in the US -- that's 117 million people -- have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy. Yet we know facial recognition is not fail-proof, and labeling faces consistently remains a challenge. You might have seen this on Facebook. My friends and I laugh all the time when we see other people mislabeled in our photos. But misidentifying a suspected criminal is no laughing matter, nor is breaching civil liberties.
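The kind of audit that paragraph calls for can be stated very simply: measure error rates per group, not just one overall number. A minimal sketch, with hypothetical record tuples standing in for real match logs:

```python
# Minimal sketch of a disaggregated accuracy audit: one error
# rate per group instead of a single headline number.
from collections import defaultdict

def audit(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # A system can look accurate overall while failing badly on
    # the groups least represented in its training data.
    return {g: errors[g] / totals[g] for g in totals}

print(audit([("A", 1, 1), ("A", 2, 2), ("B", 3, 4), ("B", 5, 6)]))
# {'A': 0.0, 'B': 1.0}
```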
Machine learning is being used for facial recognition, but it's also extending beyond the realm of computer vision. In her book, "Weapons of Math Destruction," data scientist Cathy O'Neil talks about the rising new WMDs -- widespread, mysterious and destructive algorithms that are increasingly being used to make decisions that impact more aspects of our lives. So who gets hired or fired? Do you get that loan? Do you get insurance? Are you admitted into the college you wanted to get into? Do you and I pay the same price for the same product purchased on the same platform?
Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison. So we really have to think about these decisions. Are they fair? And we've seen that algorithmic bias doesn't necessarily lead to fair outcomes.
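"Are they fair?" can be made into a measurable question. One common check (an illustration here, not a claim about any real scoring system) compares the false positive rate of a risk score across groups, since a score can be accurate on average and still wrongly flag one group far more often:

```python
# Minimal sketch: compare false positive rates of a risk score
# across groups. All data below is made up for illustration.
def false_positive_rate(rows, group, threshold=0.5):
    """rows: (group, risk_score, reoffended) tuples."""
    flagged = sum(1 for g, s, y in rows
                  if g == group and s >= threshold and not y)
    negatives = sum(1 for g, s, y in rows if g == group and not y)
    return flagged / negatives if negatives else 0.0

rows = [("A", 0.9, False), ("A", 0.2, False),
        ("B", 0.8, False), ("B", 0.7, False)]
for g in ("A", "B"):
    print(g, false_positive_rate(rows, g))  # A 0.5, B 1.0
```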
So what can we do about it? Well, we can start thinking about how we create more inclusive code and employ inclusive coding practices. It really starts with people. So who codes matters. Are we creating full-spectrum teams with diverse individuals who can check each other's blind spots? On the technical side, how we code matters. Are we factoring in fairness as we're developing systems? And finally, why we code matters. We've used tools of computational creation to unlock immense wealth. We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought. And so these are the three tenets that will make up the "incoding" movement. Who codes matters, how we code matters and why we code matters.
So to go towards incoding, we can start thinking about building platforms that can identify bias by collecting people's experiences like the ones I shared, but also auditing existing software. We can also start to create more inclusive training sets. Imagine a "Selfies for Inclusion" campaign where you and I can help developers test and create more inclusive training sets. And we can also start thinking more conscientiously about the social impact of the technology that we're developing.
To get the incoding movement started, I've launched the Algorithmic Justice League, where anyone who cares about fairness can help fight the coded gaze. On codedgaze.com, you can report bias, request audits, become a tester and join the ongoing conversation, #codedgaze.
So I invite you to join me in creating a world where technology works for all of us, not just some of us, a world where we value inclusion and center social change.
Thank you.
(Applause)
But I have one question: Will you join me in the fight?
(Laughter)
(Applause)