Sociologist Zeynep Tufekci once said that history is full of examples of massive harm caused by people with great power who believed that, simply because they had good intentions, they could not cause harm.
In 2017, Rohingya refugees started to flee Myanmar into Bangladesh due to a crackdown by the Myanmar military, an act the UN subsequently described as carried out with genocidal intent. As they started to arrive at camps, they had to register for a range of services. One of these was registering for a government-backed digital biometric identification card. They weren't actually given the option to opt out. In 2021, Human Rights Watch accused international humanitarian agencies of sharing improperly collected information about Rohingya refugees with the Myanmar government without appropriate consent. The information shared didn't just contain biometrics. It contained information about family makeup, relatives overseas, where they were originally from. Fearing retaliation by the Myanmar government, some went into hiding.
Targeted identification of persecuted peoples has long been a tactic of genocidal regimes. But now that data is digitized, it is faster to access, quicker to scale and more readily available. This was a failure on a multitude of fronts: institutional, governance, moral.
I have spent 15 years of my career working in humanitarian aid, from Rwanda to Afghanistan. What is humanitarian aid, you might ask? In its simplest terms, it's the provision of emergency care to those who need it the most at desperate times. Post-disaster, during a crisis. Food, water, shelter. I have worked within very large humanitarian organizations, from leading multicountry global programs to designing drone innovations for disaster management across small island states. I have sat with communities in the most fragile of contexts, where conversations about the future are the first ones they've ever had. And I have designed global strategies to prepare humanitarian organizations for these same futures. And the one thing I can say is that we humanitarians have embraced digitalization at an incredible speed over the last decade, moving from tents and water cans (which we still use, by the way) to AI, big data, drones, biometrics. These might seem relevant, logical, needed, even sexy to technology enthusiasts. But what it actually is, is the deployment of untested technologies on vulnerable populations without appropriate consent. And this gives me pause. I pause because the agonies we are facing today as a global humanity didn't just happen overnight. They happened as a result of our shared history of colonialism, and humanitarian technology innovations are inherently colonial: often designed for, and in the name of the good of, groups of people seen as outside of technology themselves, and often not legitimately recognized as being able to provide their own solutions.
And so, as a humanitarian myself, I ask this question: in our quest to do good in the world, how can we ensure that we do not lock people into future harm, future indebtedness and future inequity as a result of these actions? It is why I now study the ethics of humanitarian tech innovation. And this isn't just an intellectually curious pursuit. It's a deeply personal one, driven by the belief that it is often people who look like me, who come from the communities I come from, historically excluded and marginalized, who are spoken for on their behalf and denied a voice in the choices available to us for our future. I stand here on the shoulders of all those who have come before me, and in obligation to all those who will come after me, to say to you that good intentions alone do not prevent harm, and good intentions alone can cause harm.
I'm often asked, what do I see ahead of us in this 21st century? And if I had to sum it up: deep uncertainty, a dying planet, distrust, pain. And in times of great volatility, we as human beings yearn for a balm. And digital futures are exactly that, a balm. We look at them in all of their possibility, as if they could soothe all that ails us, like a logical inevitability.
In recent years, reports have started to flag the new types of risks that are emerging from technology innovations. One of these is how data collected on vulnerable individuals can actually be used against them as retaliation, posing greater risk not just to them, but to their families and their communities. We saw these risks become a truth with the Rohingya. And very, very recently, in August 2021, as Afghanistan fell to the Taliban, it also came to light that biometric data collected on Afghans by the US military and the Afghan government, and used by a variety of actors, was now in the hands of the Taliban. Journalists' houses were searched. Afghans desperately raced against time to erase their digital history online. Technologies of empowerment then become technologies of disempowerment. It is because these technologies are designed on a certain set of societal assumptions, embedded in markets and then filtered through capitalist considerations. But technologies created in one context and then parachuted into another will always fail, because they are based on assumptions of how people lead their lives. And whilst here, you and I may be relatively comfortable providing a fingerprint scan to perhaps go to the movies, we cannot extrapolate that out to the level of safety one would feel while standing in line, having to give up that little bit of data about themselves in order to access food rations. Humanitarians assume that technology will liberate humanity, but without any due consideration of the issues of power, exploitation and harm that can occur along the way. Instead, we rush to solutionizing, a form of magical thinking that assumes that just by deploying shiny solutions, we can solve the problem in front of us without any real analysis of underlying realities.
These are tools at the end of the day, and tools, like a chef's knife, are, in the hands of some, the creator of a beautiful meal, and in the hands of others, devastation. So how do we ensure that we do not design the inequities of our past into our digital futures? And I want to be clear about one thing. I'm not anti-tech. I am anti-dumb tech.
(Laughter)
(Applause)
The limited imaginings of the few should not colonize the radical re-imaginings of the many.
So how then do we ensure that we design an ethical baseline, so that the liberation that this promises is not just for a privileged few, but for all of us? There are a few examples that can point to a way forward.
I love the work of Indigenous AI, which, instead of drawing from Western values and philosophies, draws from Indigenous protocols and values to embed into AI code. I also really love the work of Nia Tero, an Indigenous co-led organization that works with Indigenous communities to map their own well-being and territories, as opposed to other people coming in to do it on their behalf. I've learned a lot from the Satellite Sentinel Project back in 2010, which is a slightly different example. The project started essentially to map atrocities through remote sensing technologies, satellites, in order to be able to predict and potentially prevent them. Now, the project wound down after a few years for a variety of reasons, one of which was that it couldn't actually generate action. But the second, and probably the most important, was that the team realized they were operating without an ethical net. And without ethical guidelines in place, whether what they were doing was helpful or harmful remained a wide-open question. And so they decided to wind down before creating harm.
In the absence of legally binding ethical frameworks to guide our work, I have been working on a range of ethical principles to help inform humanitarian tech innovation, and I'd like to put forward a few of these here for you today.
One: Ask. Which groups of humans will be harmed by this, and when? Assess: Who does this solution actually benefit? Interrogate: Was appropriate consent obtained from the end users? Consider: What must we gracefully exit out of to be fit for these futures? And imagine: What future good might we foreclose if we implemented this action today?
We are accountable for the futures that we create. We cannot absolve ourselves of the responsibilities and accountabilities of our actions if our actions actually cause harm to those that we purport to protect and serve. Another world is absolutely, radically possible.
Thank you.
(Applause)