I work on helping computers communicate about the world around us. There are a lot of ways to do this, and I like to focus on helping computers to talk about what they see and understand. Given a scene like this, a modern computer-vision algorithm can tell you that there's a woman and there's a dog. It can tell you that the woman is smiling. It might even be able to tell you that the dog is incredibly cute. I work on this problem thinking about how humans understand and process the world -- the thoughts, memories and stories that a scene like this might evoke for humans, all the interconnections of related situations. Maybe you've seen a dog like this one before, or you've spent time running on a beach like this one, and that further evokes thoughts and memories of a past vacation, past trips to the beach, times spent running around with other dogs. One of my guiding principles is that if we help computers to understand what it's like to have these experiences, to understand what we share and believe and feel, then we're in a great position to start evolving computer technology in a way that's complementary to our own experiences.
So, digging more deeply into this, a few years ago I began working on helping computers to generate human-like stories from sequences of images. So, one day, I was working with my computer to ask it what it thought about a trip to Australia. It took a look at the pictures, and it saw a koala. It didn't know what the koala was, but it said it thought it was an interesting-looking creature. Then I shared with it a sequence of images about a house burning down. It took a look at the images and it said, "This is an amazing view! This is spectacular!" It sent chills down my spine. It saw a horrible, life-changing and life-destroying event and thought it was something positive. I realized that it recognized the contrast, the reds, the yellows, and thought it was something worth remarking on positively. And part of why it was doing this was because most of the images I had given it were positive images. That's because people tend to share positive images when they talk about their experiences. When was the last time you saw a selfie at a funeral?
I realized that, as I worked on improving AI task by task, dataset by dataset, I was creating massive gaps, holes and blind spots in what it could understand. And while doing so, I was encoding all kinds of biases. Biases that reflect a limited viewpoint, limited to a single dataset -- biases that can reflect human biases found in the data, such as prejudice and stereotyping. I thought back to the evolution of the technology that brought me to where I was that day -- how the first color images were calibrated against a white woman's skin, meaning that color photography was biased against black faces. And that same bias, that same blind spot continued well into the '90s. And the same blind spot continues even today in how well we can recognize different people's faces in facial recognition technology. I thought about the state of the art in research today, where we tend to limit our thinking to one dataset and one problem. And that in doing so, we were creating more blind spots and biases that the AI could further amplify.
I realized then that we had to think deeply about how the technology we work on today looks in five years, in 10 years. Humans evolve slowly, with time to correct for issues in the interaction of humans and their environment. In contrast, artificial intelligence is evolving at an incredibly fast rate. And that means that it really matters that we think about this carefully right now -- that we reflect on our own blind spots, our own biases, and think about how that's informing the technology we're creating and discuss what the technology of today will mean for tomorrow.
CEOs and scientists have weighed in on what they think the artificial intelligence technology of the future will be. Stephen Hawking warns that "Artificial intelligence could end mankind." Elon Musk warns that it's an existential risk and one of the greatest risks that we face as a civilization. Bill Gates has made the point, "I don't understand why people aren't more concerned." But these views -- they're part of the story. The math, the models, the basic building blocks of artificial intelligence are something that we can all access and work with. We have open-source tools for machine learning and intelligence that we can contribute to. And beyond that, we can share our experience. We can share our experiences with technology and how it concerns us and how it excites us. We can discuss what we love. We can communicate with foresight about the aspects of technology that could be more beneficial or could be more problematic over time.
If we all focus on opening up the discussion on AI with foresight towards the future, this will help create a general conversation and awareness about what AI is now, what it can become and all the things that we need to do in order to enable that outcome that best suits us. We already see and know this in the technology that we use today. We use smart phones and digital assistants and Roombas. Are they evil? Maybe sometimes. Are they beneficial? Yes, they're that, too. And they're not all the same. And there you already see a light shining on what the future holds. The future continues on from what we build and create right now. We set into motion that domino effect that carves out AI's evolutionary path.
In our time right now, we shape the AI of tomorrow. Technology that immerses us in augmented realities, bringing past worlds to life. Technology that helps people to share their experiences when they have difficulty communicating. Technology built on understanding streaming visual worlds, used in self-driving cars. Technology built on understanding images and generating language, evolving into technology that helps people who are visually impaired be better able to access the visual world. And we also see how technology can lead to problems. We have technology today that analyzes physical characteristics we're born with -- such as the color of our skin or the look of our face -- in order to determine whether or not we might be criminals or terrorists. We have technology that crunches through our data, even data relating to our gender or our race, in order to determine whether or not we might get a loan. All that we see now is a snapshot in the evolution of artificial intelligence. Because where we are right now is within a moment of that evolution. That means that what we do now will affect what happens down the line and in the future.
If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now. What I'd like to see is something that fits well with humans, with our culture and with the environment. Technology that aids and assists those of us with neurological conditions or other disabilities in order to make life equally challenging for everyone. Technology that works regardless of your demographics or the color of your skin. And so today, what I focus on is the technology for tomorrow and for 10 years from now.
AI can turn out in many different ways. But in this case, it isn't a self-driving car without any destination. This is the car that we are driving. We choose when to speed up and when to slow down. We choose if we need to make a turn. We choose what the AI of the future will be. There's a vast playing field of all the things that artificial intelligence can become. It will become many things. And it's up to us now to figure out what we need to put in place to make sure the outcomes of artificial intelligence are the ones that will be better for all of us.
Thank you.
(Applause)