So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.
Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?" There was nobody else in the room.
"Can who tell if you're lying? And why are we whispering?"
"Zna li ko da lažete? I zašto šapućemo?"
The manager pointed at the computer in the room. "Can he tell if I'm lying?" Well, that manager was having an affair with the receptionist.
(Laughter)
And I was still a teenager. So I whisper-shouted back to him, "Yes, the computer can tell if you're lying."
(Laughter)
Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.
I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.
(Laughter)
Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.
Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions of computation that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"
Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."
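To make that contrast concrete, here is a minimal sketch in Python -- my illustration, not anything from the talk; the features, data and spam example are all invented. It sets a hand-written rule next to a learned model whose output is a probability rather than a verdict.

```python
# Illustrative sketch only: traditional rules vs. a learned, probabilistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Traditional programming: detailed, exact, painstaking instructions.
def rule_based_spam_check(num_links: int, has_greeting: bool) -> bool:
    return num_links > 3 and not has_greeting

# Machine learning: feed the system lots of data and let it churn through it.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # two made-up features per message
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# No single-answer logic: the output is a probability, not a verdict --
# "this one is probably more like what you're looking for."
print(model.predict_proba([[1.2, -0.3]]))  # e.g. [[0.2 0.8]]
```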
Now, the upside is: this method is really powerful. The head of Google's AI systems called it, "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.
So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.
And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.
So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.
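As a toy illustration of such inference -- entirely my sketch, with invented posts and labels -- a model can learn to predict something a person never stated outright from nothing but their social-media text:

```python
# Toy sketch: inferring an undisclosed trait from social-media posts alone.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "lower taxes would fix this", "proud of our troops today",
    "we need universal healthcare", "climate action cannot wait",
    "small government works best", "unions built this country",
]
leanings = [0, 0, 1, 1, 0, 1]  # political leaning, never disclosed directly

model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(posts, leanings)

# The person never announced a leaning; the score is pure inference.
print(model.predict_proba(["healthcare is a right"]))
```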
I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?" You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.
"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?" She looked at me as if I had just stepped on 10 puppy tails.
"Koja vam je zaštita", pitala sam, "koju imate kojom se starate da crna kutija ne obavlja nešto sumnjivo?" Pogledala me je kao da sam nagazila na 10 kučećih repića.
(Laughter)
She stared at me and she said, "I don't want to hear another word about this." And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.
(Laughter)
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making over to machines we don't totally understand?
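Part of why that black box resists inspection shows up even in a toy example -- mine, with invented data, not any real hiring system. After training, all the model exposes is numeric weight over anonymous feature columns:

```python
# Toy sketch: what a trained hiring model actually exposes from the outside.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))    # 20 unlabeled behavioral proxy features
y = rng.integers(0, 2, size=500)  # past "high performer" labels (invented)

model = GradientBoostingClassifier().fit(X, y)

# All you can see: importance scores over anonymous columns. There is no
# variable labeled "higher risk of depression" or "aggressive guy scale".
print(model.feature_importances_)
```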
Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."
Researchers found that on Google, women are less likely than men to be shown ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting a criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover but sometimes don't, can have life-altering consequences.
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested.
She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended. It was just hard to get a job for her with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not have them have this kind of unchecked power.
(Applause)
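In miniature, such an audit can be as simple as comparing error rates across groups. This sketch assumes a made-up data layout and invented records -- it is not ProPublica's actual data or code:

```python
# Minimal audit sketch: the black box's label vs. the observed outcome.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,   # invented records
    "high_risk":  [1, 1, 0, 1, 0, 0, 1, 0],        # algorithm's prediction
    "reoffended": [0, 1, 0, 0, 0, 1, 1, 0],        # observed two years later
})

# False positive rate: labeled a future criminal, but did not reoffend.
fpr = df[df["reoffended"] == 0].groupby("group")["high_risk"].mean()
print(fpr)  # ProPublica found roughly a two-to-one gap between such rates
```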
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?
(Laughter)
A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.
In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.
The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like?" It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
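Facebook's real ranking system is proprietary and vastly more complex, but a cartoon version of optimizing for engagement -- every number and weight here is invented -- shows the mechanism:

```python
# Cartoon of engagement-optimized ranking; all probabilities are invented.
posts = [
    {"title": "baby picture",     "p_like": 0.30, "p_comment": 0.10, "p_share": 0.05},
    {"title": "ice bucket video", "p_like": 0.25, "p_comment": 0.08, "p_share": 0.12},
    {"title": "Ferguson protest", "p_like": 0.02, "p_comment": 0.03, "p_share": 0.02},
]

def engagement_score(post):
    # Hypothetical weights on predicted likes, comments, and shares.
    return post["p_like"] + 2.0 * post["p_comment"] + 3.0 * post["p_share"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post['title']}")
# The hard-but-important story sinks to the bottom: nobody clicks "like" on it,
# so the ranker shows it to ever fewer people.
```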
Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."
(Hums Final Jeopardy music)
Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make.
Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
(Laughter)
In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.
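A feedback loop of that kind is easy to caricature. This is purely illustrative -- not how any real trading system works -- a sell rule reacting to the very price drop its own selling causes:

```python
# Purely illustrative feedback loop: a sell rule reacting to its own impact.
price = 100.0
price *= 0.999                 # a small initial dip starts the loop
for minute in range(10):
    if price < 100.0:          # rule: sell whenever the price is below par
        price *= 0.90          # selling pushes the price down further...
    print(minute, round(price, 2))
# ...which re-triggers the rule on the next tick: value evaporates in minutes.
```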
So yes, humans have always been biased. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.
(Applause)
Artificial intelligence does not give us a "Get out of ethics free" card.
Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.
Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.
Thank you.
(Applause)