This is a thought experiment.
Let's say at some point in the not-so-distant future, you're barreling down the highway in your self-driving car, and you find yourself boxed in on all sides by other cars. Suddenly, a large, heavy object falls off the truck in front of you. Your car can't stop in time to avoid the collision, so it needs to make a decision: go straight and hit the object, swerve left into an SUV, or swerve right into a motorcycle. Should it prioritize your safety by hitting the motorcycle, minimize danger to others by not swerving, even if it means hitting the large object and sacrificing your life, or take the middle ground by hitting the SUV, which has a high passenger safety rating? So what should the self-driving car do?
If we were driving that boxed-in car in manual mode, whichever way we reacted would be understood as just that: a reaction, not a deliberate decision. It would be an instinctual, panicked move with no forethought or malice. But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.
Now, to be fair, self-driving cars are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation. Plus, there may be all sorts of other benefits: eased road congestion, decreased harmful emissions, and minimized unproductive and stressful driving time. But accidents can and will still happen, and when they do, their outcomes may be determined months or years in advance by programmers or policy makers. And they'll have some difficult decisions to make. It's tempting to offer up general decision-making principles, like "minimize harm," but even that quickly leads to morally murky decisions.
For example, let's say we have the same initial setup, but now there's a motorcyclist wearing a helmet to your left and another one without a helmet to your right. Which one should your robot car crash into? If you say the biker with the helmet because she's more likely to survive, then aren't you penalizing the responsible motorist? If, instead, you save the biker without the helmet because he's acting irresponsibly, then you've gone way beyond the initial design principle about minimizing harm, and the robot car is now meting out street justice.
The ethical considerations get more complicated here. In both of our scenarios, the underlying design is functioning as a targeting algorithm of sorts. In other words, it's systematically favoring or discriminating against a certain type of object to crash into. And the owners of the target vehicles will suffer the negative consequences of this algorithm through no fault of their own.
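The "targeting algorithm" described above can be made concrete with a minimal sketch. Nothing here is from any real vehicle's software; the function name, the option structure, and the harm scores are all illustrative assumptions. It simply shows how a pure minimize-harm rule, applied to the helmet scenario, systematically selects the more survivable target:

```python
# Hypothetical sketch of a minimize-harm targeting rule.
# The expected_harm values are illustrative assumptions, not real data.

def choose_target(options):
    """Return the collision option with the lowest expected-harm score."""
    return min(options, key=lambda o: o["expected_harm"])

options = [
    # Helmeted rider: assumed more likely to survive, so lower harm score.
    {"target": "rider_with_helmet", "expected_harm": 0.3},
    # Unhelmeted rider: assumed less likely to survive, so higher harm score.
    {"target": "rider_without_helmet", "expected_harm": 0.8},
]

choice = choose_target(options)
print(choice["target"])  # → rider_with_helmet
```

Note the design consequence: because the rule ranks targets only by survivability, it always picks the helmeted rider, which is exactly the systematic penalty on the responsible motorist that the scenario describes.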
Our new technologies are opening up many other novel ethical dilemmas. For instance, if you had to choose between a car that would always save as many lives as possible in an accident, or one that would save you at any cost, which would you buy? What happens if the cars start analyzing and factoring in the passengers of the cars and the particulars of their lives? Could it be the case that a random decision is still better than a predetermined one designed to minimize harm? And who should be making all of these decisions anyhow? Programmers? Companies? Governments?
Reality may not play out exactly like our thought experiments, but that's not the point. They're designed to isolate and stress-test our intuitions on ethics, just like science experiments do for the physical world. Spotting these moral hairpin turns now will help us maneuver the unfamiliar road of technology ethics, and allow us to cruise confidently and conscientiously into our brave new future.