This is a thought experiment.
Let's say at some point in the not-so-distant future, you're barreling down the highway in your self-driving car, and you find yourself boxed in on all sides by other cars. Suddenly, a large, heavy object falls off the truck in front of you. Your car can't stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve left into an SUV, or swerve right into a motorcycle. Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving, even if it means hitting the large object and sacrificing your life, or take the middle ground by hitting the SUV, which has a high passenger safety rating? So what should the self-driving car do?
If we were driving that boxed-in car in manual mode, whichever way we reacted would be understood as just that, a reaction, not a deliberate decision. It would be an instinctual, panicked move with no forethought or malice. But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.
Now, to be fair, self-driving cars are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation. Plus, there may be all sorts of other benefits: eased road congestion, decreased harmful emissions, and minimized unproductive and stressful driving time. But accidents can and will still happen, and when they do, their outcomes may be determined months or years in advance by programmers or policy makers. And they'll have some difficult decisions to make. It's tempting to offer up general decision-making principles, like minimize harm, but even that quickly leads to morally murky decisions.
For example, let's say we have the same initial set up, but now there's a motorcyclist wearing a helmet to your left and another one without a helmet to your right. Which one should your robot car crash into? If you say the biker with the helmet because she's more likely to survive, then aren't you penalizing the responsible motorist? If, instead, you save the biker without the helmet because he's acting irresponsibly, then you've gone way beyond the initial design principle about minimizing harm, and the robot car is now meting out street justice.
The ethical considerations get more complicated here. In both of our scenarios, the underlying design is functioning as a targeting algorithm of sorts. In other words, it's systematically favoring or discriminating against a certain type of object to crash into. And the owners of the target vehicles will suffer the negative consequences of this algorithm through no fault of their own.
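The "targeting algorithm" framing can be made concrete with a toy sketch. This is a hypothetical illustration, not any real vehicle's code: the maneuver names and harm estimates below are entirely invented, but they show how a simple "minimize harm" rule deterministically singles out one class of vehicle.

```python
# Hypothetical sketch of a naive "minimize harm" targeting policy.
# All maneuver names and harm estimates are invented for illustration.

def choose_crash_target(options):
    """Return the maneuver with the lowest estimated total harm.

    `options` maps a maneuver name to a made-up expected-harm score.
    """
    return min(options, key=options.get)

options = {
    "straight_into_object": 0.9,    # high risk to the car's own passenger
    "swerve_left_into_suv": 0.5,    # SUV has a high passenger safety rating
    "swerve_right_into_bike": 0.8,  # motorcyclist is exposed
}

print(choose_crash_target(options))  # prints "swerve_left_into_suv"
```

However the scores are tuned, the same input always yields the same target, which is exactly why the owners of the favored "target" vehicles bear the consequences through no fault of their own.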
Our new technologies are opening up many other novel ethical dilemmas. For instance, if you had to choose between a car that would always save as many lives as possible in an accident, or one that would save you at any cost, which would you buy? What happens if the cars start analyzing and factoring in the passengers of the cars and the particulars of their lives? Could it be the case that a random decision is still better than a predetermined one designed to minimize harm? And who should be making all of these decisions anyhow? Programmers? Companies? Governments?
Reality may not play out exactly like our thought experiments, but that's not the point. They're designed to isolate and stress-test our intuitions on ethics, just like science experiments do for the physical world. Spotting these moral hairpin turns now will help us maneuver the unfamiliar road of technology ethics, and allow us to cruise confidently and conscientiously into our brave new future.