You and your partner Alex have been in a strong, loving relationship for years, and lately you're considering getting engaged. Alex is enthusiastic about the idea, but you can’t get over the statistics. You know a lot of marriages end in divorce, often not amicably. And over 10% of couples in their first marriage get divorced within the first five years. If your marriage wouldn’t even last five years, you feel like tying the knot would be a mistake. But you live in the near future, where a brand-new company just released an AI-based model that can predict your likelihood of divorce. The model is trained on data sets containing individuals’ social media activity, online search histories, spending habits, and history of marriage and divorce. And using this information, the AI can predict if a couple will divorce within the first five years of marriage with 95% accuracy. The only catch is the model doesn’t offer any reasons for its results— it simply predicts that you will or won’t divorce without saying why. So, should you decide whether or not to get married based on this AI’s prediction?
Suppose the model predicts you and Alex would divorce within five years of getting married. At this point, you'd have three options. You could get married anyway and hope the prediction is wrong. You could break up now, though there’s no way to know if ending your currently happy relationship would cause more harm than letting the prediction run its course. Or, you could stay together and remain unmarried, on the off-chance marriage itself would be the problem. Though without understanding the reasons for your predicted divorce, you’d never know if those mystery issues would still emerge to ruin your relationship.
The uncertainty undermining all these options stems from a well-known issue with AI: a lack of explainability and transparency. This problem plagues many potentially useful predictive models, such as those that could be used to predict which bank customers are most likely to repay a loan, or which prisoners are most likely to reoffend if granted parole. Without knowing why AI systems reach their decisions, many worry we can’t think critically about how to follow their advice.
But the transparency problem doesn’t just prevent us from understanding these models; it also affects the user’s accountability. For example, if the AI's prediction led you to break up with Alex, what explanation could you reasonably offer them? That you want to end your happy relationship because some mysterious machine predicted its demise? That hardly seems fair to Alex. We don’t always owe people an explanation for our actions, but when we do, AI’s lack of transparency can create ethically challenging situations. And accountability is just one of the tradeoffs we make by outsourcing important decisions to AI. If you’re comfortable deferring your agency to an AI model, it’s likely because you’re focused on the accuracy of the prediction. In this mindset, it doesn’t really matter why you and Alex might break up, simply that you likely will. But if you prioritize authenticity over accuracy, then you'll need to understand and appreciate the reasons for your future divorce before ending things today. Authentic decision-making like this is essential for maintaining accountability, and it might be your best chance to prove the prediction wrong. On the other hand, it’s also possible the model already accounted for your attempts to defy it, and you’re just setting yourself up for failure.
95% accuracy is high, but it’s not perfect— that figure means 1 in 20 couples will receive a false prediction. And as more people use this service, the likelihood increases that someone who was predicted to divorce will do so just because the AI predicted they would. If that happens to enough newlyweds, the AI's success rate could be artificially maintained or even increased by these self-fulfilling predictions. Of course, no matter what the AI might tell you, whether you even ask for its prediction is still up to you.
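There is a further wrinkle in the arithmetic that a quick calculation makes vivid. Because only about 10% of first marriages end within five years, a "you will divorce" verdict from a 95%-accurate model is weaker evidence than it sounds. The sketch below applies Bayes' rule to the figures already given in the story; it assumes, purely for illustration, that the 95% accuracy holds equally for couples who do and don't divorce (sensitivity = specificity = 0.95), which the story does not actually specify.

```python
# Illustrative Bayes' rule calculation using the story's own figures.
# Assumption (not stated in the original): the model is equally
# accurate on divorcing and non-divorcing couples.

prior = 0.10        # P(divorce within 5 years), the ~10% base rate
sensitivity = 0.95  # P(model predicts divorce | couple will divorce)
specificity = 0.95  # P(model predicts no divorce | couple won't divorce)

# Total probability that the model predicts divorce for a random couple:
p_pred_divorce = sensitivity * prior + (1 - specificity) * (1 - prior)

# Posterior: P(couple will actually divorce | model predicted divorce)
posterior = (sensitivity * prior) / p_pred_divorce

print(f"P(model predicts divorce) = {p_pred_divorce:.3f}")  # 0.140
print(f"P(divorce | prediction)   = {posterior:.3f}")       # 0.679
```

Under these assumptions, roughly a third of couples who receive a divorce prediction would in fact have stayed together, which sharpens the stakes of the self-fulfilling-prophecy worry above: acting on the prediction could end many relationships the model got wrong.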