Sociologist Zeynep Tufekci once observed that history is full of examples of harm caused by people with great power who believed that, simply because their intentions were good, they could not cause harm.
In 2017, Rohingya refugees began fleeing Myanmar into Bangladesh to escape a crackdown by the Myanmar military, an act the UN subsequently described as carried out with genocidal intent. As they arrived in the camps, they had to register for a range of services. One of these was registration for a government-backed digital biometric identification card. They weren't actually given the option to opt out. In 2021, Human Rights Watch accused international humanitarian agencies of sharing improperly collected information about Rohingya refugees with the Myanmar government without appropriate consent. The information shared didn't just contain biometrics. It contained information about family makeup, relatives overseas, and where they were originally from. Fearing retaliation by the Myanmar government, some went into hiding.
Targeted identification of persecuted peoples has long been a tactic of genocidal regimes. But that data is now digitized, meaning it is faster to access, quicker to scale and more readily shared. This was a failure on a multitude of fronts: institutional, governmental, moral.
I have spent 15 years of my career working in humanitarian aid, from Rwanda to Afghanistan. What is humanitarian aid, you might ask? In its simplest terms, it's the provision of emergency care to those who need it the most at desperate times: post-disaster, during a crisis. Food, water, shelter. I have worked within very large humanitarian organizations, from leading multicountry global programs to designing drone innovations for disaster management across small island states. I have sat with communities in the most fragile of contexts, where conversations about the future are the first ones they've ever had. And I have designed global strategies to prepare humanitarian organizations for these same futures. And the one thing I can say is that humanitarians, we have embraced digitalization at an incredible speed over the last decade, moving from tents and water cans, which we still use, by the way, to AI, big data, drones, biometrics. These might seem relevant, logical, needed, even sexy to technology enthusiasts. But what it actually is, is the deployment of untested technologies on vulnerable populations without appropriate consent. And this gives me pause. I pause because the agonies we are facing today as a global humanity didn't just happen overnight. They happened as a result of our shared history of colonialism, and humanitarian technology innovations are inherently colonial: often designed for, and in the name of the good of, groups of people seen as outside of technology themselves, and often not legitimately recognized as being able to provide their own solutions.
And so, as a humanitarian myself, I ask this question: in our quest to do good in the world, how can we ensure that we do not lock people into future harm, future indebtedness and future inequity as a result of these actions? It is why I now study the ethics of humanitarian tech innovation. And this isn't just an intellectually curious pursuit. It's a deeply personal one, driven by the belief that it is often people who look like me, who come from the communities I come from, historically excluded and marginalized, who are spoken for on others' behalf and denied a voice in the choices available to us for our future. I stand here on the shoulders of all those who have come before me, and in obligation to all those who will come after me, to say to you that good intentions alone do not prevent harm, and good intentions alone can cause harm.
I'm often asked what I see ahead of us in this twenty-first century. And if I had to sum it up: deep uncertainty, a dying planet, distrust, pain. And in times of great volatility, we as human beings yearn for a balm. And digital futures are exactly that, a balm. We look at them in all of their possibility, as if they could soothe all that ails us, as though they were a logical inevitability.
In recent years, reports have started to flag the new types of risks emerging from technology innovations. One of these concerns how data collected on vulnerable individuals can actually be used against them in retaliation, posing greater risk not just to them, but to their families and their communities. We saw these risks become a reality with the Rohingya. And very, very recently, in August 2021, as Afghanistan fell to the Taliban, it also came to light that biometric data collected on Afghans by the US military and the Afghan government, and used by a variety of actors, was now in the hands of the Taliban. Journalists' houses were searched. Afghans desperately raced against time to erase their digital history online. Technologies of empowerment thus become technologies of disempowerment. It is because these technologies are designed on a certain set of societal assumptions, embedded in markets and then filtered through capitalist considerations. But technologies created in one context and then parachuted into another will always fail, because they are based on assumptions about how people lead their lives. And whilst here, you and I may be relatively comfortable providing a fingerprint scan to, perhaps, go to the movies, we cannot extrapolate that out to the level of safety one would feel while standing in line, having to give up that little bit of data about themselves in order to access food rations. Humanitarians assume that technology will liberate humanity, without any due consideration of the issues of power, exploitation and harm that can occur along the way. Instead, we rush to solutionizing, a form of magical thinking that assumes that just by deploying shiny solutions, we can solve the problem in front of us without any real analysis of the underlying realities.
These are tools at the end of the day, and tools, like a chef's knife, are in the hands of some the maker of a beautiful meal, and in the hands of others, an instrument of devastation. So how do we ensure that we do not design the inequities of our past into our digital futures? And I want to be clear about one thing. I'm not anti-tech. I am anti-dumb tech.
(Laughter)
(Applause)
The limited imaginings of the few should not colonize the radical re-imaginings of the many.
So how then do we ensure that we design an ethical baseline, so that the liberation this promises is not just for a privileged few, but for all of us? There are a few examples that can point the way forward.
I love the work of Indigenous AI, which, instead of drawing from Western values and philosophies, draws from Indigenous protocols and values to embed in AI code. I also really love the work of Nia Tero, an Indigenous co-led organization that works with Indigenous communities to map their own well-being and territories, as opposed to other people coming in to do it on their behalf. I've learned a lot from the Satellite Sentinel Project back in 2010, which is a slightly different example. The project started essentially to map atrocities through remote sensing technologies, satellites, in order to be able to predict and potentially prevent them. Now, the project wound down after a few years, for a variety of reasons, one of which was that it couldn't actually generate action. But the second, and probably the most important, was that the team realized they were operating without an ethical net. And without ethical guidelines in place, it was a wide-open question whether what they were doing was helpful or harmful. And so they decided to wind down before creating harm.
In the absence of legally binding ethical frameworks to guide our work, I have been working on a range of ethical principles to help inform humanitarian tech innovation, and I'd like to put forward a few of these here for you today.
One: Ask. Which groups of humans will be harmed by this and when? Assess: Who does this solution actually benefit? Interrogate: Was appropriate consent obtained from the end users? Consider: What must we gracefully exit out of to be fit for these futures? And imagine: What future good might we foreclose if we implemented this action today?
We are accountable for the futures that we create. We cannot absolve ourselves of responsibility and accountability for our actions if those actions actually cause harm to the very people we purport to protect and serve. Another world is absolutely, radically possible.
Thank you.
(Applause)