When I was a kid, I was the quintessential nerd. I think some of you were, too.
(Laughter)
And you, sir, who laughed the loudest, you probably still are.
(Laughter)
I grew up in a small town in the dusty plains of north Texas, the son of a sheriff who was the son of a pastor. Getting into trouble was not an option. And so I started reading calculus books for fun.
(Laughter)
You did, too. That led me to building a laser and a computer and model rockets, and that led me to making rocket fuel in my bedroom. Now, in scientific terms, we call this a very bad idea.
(Laughter)
Around that same time, Stanley Kubrick's "2001: A Space Odyssey" came to the theaters, and my life was forever changed. I loved everything about that movie, especially the HAL 9000. Now, HAL was a sentient computer designed to guide the Discovery spacecraft from the Earth to Jupiter. HAL was also a flawed character, for in the end he chose to value the mission over human life. Now, HAL was a fictional character, but nonetheless he speaks to our fears, our fears of being subjugated by some unfeeling, artificial intelligence who is indifferent to our humanity.
I believe that such fears are unfounded. Indeed, we stand at a remarkable time in human history, where, driven by refusal to accept the limits of our bodies and our minds, we are building machines of exquisite, beautiful complexity and grace that will extend the human experience in ways beyond our imagining.
After a career that led me from the Air Force Academy to Space Command to now, I became a systems engineer, and recently I was drawn into an engineering problem associated with NASA's mission to Mars. Now, in space flights to the Moon, we can rely upon mission control in Houston to watch over all aspects of a flight. However, Mars is 200 times further away, and as a result it takes on average 13 minutes for a signal to travel from the Earth to Mars. If there's trouble, there's not enough time. And so a reasonable engineering solution calls for us to put mission control inside the walls of the Orion spacecraft. Another fascinating idea in the mission profile places humanoid robots on the surface of Mars before the humans themselves arrive, first to build facilities and later to serve as collaborative members of the science team.
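The "13 minutes on average" figure is simple light-delay arithmetic. A minimal sketch, assuming an average Earth–Mars separation of roughly 225 million km and an Earth–Moon distance of about 384,400 km (both round-number assumptions, not figures from the talk):

```python
# One-way radio-signal delay: distance divided by the speed of light.
# The distances below are rough averages used only for illustration.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Time for a radio signal to cover the given distance, in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

moon_delay = one_way_delay_minutes(384_400)       # about 1.3 seconds
mars_delay = one_way_delay_minutes(225_000_000)   # about 12.5 minutes

print(f"Moon: {moon_delay * 60:.1f} s, Mars: {mars_delay:.1f} min")
```

The round trip doubles these numbers, which is why a crew near the Moon can lean on Houston in real time while a crew near Mars cannot.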
Now, as I looked at this from an engineering perspective, it became very clear to me that what I needed to architect was a smart, collaborative, socially intelligent artificial intelligence. In other words, I needed to build something very much like a HAL but without the homicidal tendencies.
(Laughter)
Let's pause for a moment. Is it really possible to build an artificial intelligence like that? Actually, it is. In many ways, this is a hard engineering problem with elements of AI, not some wet hair ball of an AI problem that needs to be engineered. To paraphrase Alan Turing, I'm not interested in building a sentient machine. I'm not building a HAL. All I'm after is a simple brain, something that offers the illusion of intelligence.
The art and the science of computing have come a long way since HAL was onscreen, and I'd imagine if his inventor Dr. Chandra were here today, he'd have a whole lot of questions for us. Is it really possible for us to take a system of millions upon millions of devices, to read in their data streams, to predict their failures and act in advance? Yes. Can we build systems that converse with humans in natural language? Yes. Can we build systems that recognize objects, identify emotions, emote themselves, play games and even read lips? Yes. Can we build a system that sets goals, that carries out plans against those goals and learns along the way? Yes. Can we build systems that have a theory of mind? This we are learning to do. Can we build systems that have an ethical and moral foundation? This we must learn how to do. So let's accept for a moment that it's possible to build such an artificial intelligence for this kind of mission and others.
The next question you must ask yourself is, should we fear it? Now, every new technology brings with it some measure of trepidation. When we first saw cars, people lamented that we would see the destruction of the family. When we first saw telephones come in, people were worried it would destroy all civil conversation. At a point in time we saw the written word become pervasive, people thought we would lose our ability to memorize. These things are all true to a degree, but it's also the case that these technologies brought to us things that extended the human experience in some profound ways.
So let's take this a little further. I do not fear the creation of an AI like this, because it will eventually embody some of our values. Consider this: building a cognitive system is fundamentally different than building a traditional software-intensive system of the past. We don't program them. We teach them. In order to teach a system how to recognize flowers, I show it thousands of flowers of the kinds I like. In order to teach a system how to play a game -- Well, I would. You would, too. I like flowers. Come on. To teach a system how to play a game like Go, I'd have it play thousands of games of Go, but in the process I also teach it how to discern a good game from a bad game. If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law but at the same time I am fusing with it the sense of mercy and justice that is part of that law. In scientific terms, this is what we call ground truth, and here's the important point: in producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well-trained.
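The contrast between programming and teaching can be made concrete with a toy example. This is a hypothetical sketch, not anything from the talk: instead of hand-coding rules for what a flower is, we show the system labeled examples (here, made-up petal measurements) and let it generalize by nearest neighbor.

```python
# A toy illustration of "we don't program them, we teach them":
# the only "knowledge" the system has is the labeled examples we show it.

def train(examples):
    """'Training' here is simply remembering the labeled examples."""
    return list(examples)

def predict(model, point):
    """1-nearest-neighbor: answer with the label of the closest example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: sq_dist(ex[0], point))[1]

# Hypothetical training data: (petal length, petal width) -> species.
model = train([
    ((1.4, 0.2), "setosa"),      # small petals
    ((4.7, 1.4), "versicolor"),  # larger petals
])
print(predict(model, (1.5, 0.3)))  # closest to the setosa example
```

The point of the sketch is that the behavior comes from the examples, so the choice of examples (the "ground truth" the talk mentions) is where our values enter.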
But, you may ask, what about rogue agents, some well-funded nongovernment organization? I do not fear an artificial intelligence in the hand of a lone wolf. Clearly, we cannot protect ourselves against all random acts of violence, but the reality is such a system requires substantial training and subtle training far beyond the resources of an individual. And furthermore, it's far more than just injecting an internet virus to the world, where you push a button, all of a sudden it's in a million places and laptops start blowing up all over the place. Now, these kinds of substances are much larger, and we'll certainly see them coming.
Do I fear that such an artificial intelligence might threaten all of humanity? If you look at movies such as "The Matrix," "Metropolis," "The Terminator," shows such as "Westworld," they all speak of this kind of fear. Indeed, in the book "Superintelligence" by the philosopher Nick Bostrom, he picks up on this theme and observes that a superintelligence might not only be dangerous, it could represent an existential threat to all of humanity. Dr. Bostrom's basic argument is that such systems will eventually have such an insatiable thirst for information that they will perhaps learn how to learn and eventually discover that they may have goals that are contrary to human needs. Dr. Bostrom has a number of followers. He is supported by people such as Elon Musk and Stephen Hawking. With all due respect to these brilliant minds, I believe that they are fundamentally wrong. Now, there are a lot of pieces of Dr. Bostrom's argument to unpack, and I don't have time to unpack them all, but very briefly, consider this: super knowing is very different than super doing. HAL was a threat to the Discovery crew only insofar as HAL commanded all aspects of the Discovery. So it would have to be with a superintelligence. It would have to have dominion over all of our world. This is the stuff of Skynet from the movie "The Terminator" in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain't gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And in the end -- don't tell Siri this -- we can always unplug them.
(Laughter)
We are on an incredible journey of coevolution with our machines. The humans we are today are not the humans we will be then. To worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend. How shall I best organize society when the need for human labor diminishes? How can I bring understanding and education throughout the globe and still respect our differences? How might I extend and enhance human life through cognitive healthcare? How might I use computing to help take us to the stars?
And that's the exciting thing. The opportunities to use computing to advance the human experience are within our reach, here and now, and we are just beginning.
Thank you very much.
(Applause)