In the coming years, artificial intelligence is probably going to change your life, and likely the entire world. But people have a hard time agreeing on exactly how. The following are excerpts from a World Economic Forum interview where renowned computer science professor and AI expert Stuart Russell helps separate the sense from the nonsense.
There’s a big difference between asking a human to do something and giving that as the objective to an AI system. When you ask a human to get you a cup of coffee, you don’t mean this should be their life’s mission, and nothing else in the universe matters. You don’t mean: even if they have to kill everybody else in Starbucks to get you the coffee before it closes, they should do that. No, that’s not what you mean. All the other things that we mutually care about should factor into their behavior as well.
And the problem with the way we build AI systems now is we give them a fixed objective. The algorithms require us to specify everything in the objective. And if you say, can we fix the acidification of the oceans? Yeah, you could have a catalytic reaction that does that extremely efficiently, but it consumes a quarter of the oxygen in the atmosphere, which would apparently cause us to die fairly slowly and unpleasantly over the course of several hours.
So, how do we avoid this problem? You might say, okay, well, just be more careful about specifying the objective— don’t forget the atmospheric oxygen. And then, of course, some side effect of the reaction in the ocean poisons all the fish. Okay, well I meant don’t kill the fish either. And then, well, what about the seaweed? Don’t do anything that’s going to cause all the seaweed to die. And on and on and on.
And the reason that we don’t have to do that with humans is that humans often know that they don’t know all the things that we care about. If you ask a human to get you a cup of coffee, and you happen to be in the Hotel George V in Paris, where the coffee is 13 euros a cup, it’s entirely reasonable for them to come back and say, well, it’s 13 euros, are you sure you want it, or I could go next door and get one? It’s a perfectly normal thing for a person to do. Or for a painter to ask: I’m going to repaint your house; is it okay if I take off the drainpipes and then put them back? We don’t think of this as a terribly sophisticated capability, but AI systems don’t have it, because the way we build them now, they have to know the full objective. If we build systems that know that they don’t know what the objective is, then they start to exhibit these behaviors, like asking permission before getting rid of all the oxygen in the atmosphere.
In all these senses, control over the AI system comes from the machine’s uncertainty about what the true objective is. And it’s when you build machines that believe with certainty that they have the objective, that’s when you get this sort of psychopathic behavior. And I think we see the same thing in humans.
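The contrast Russell draws can be made concrete in a toy decision rule: an agent that is certain of its objective optimizes it blindly, while an agent that knows its preference model is incomplete defers to a human before touching anything it has no preference weight for. The following is a minimal sketch of my own; the actions, effect numbers, weights, and `ask` callback are all invented for illustration, not anything from the interview.

```python
# Toy illustration (invented): a fixed-objective agent vs. an agent that
# knows it doesn't know the full objective and asks permission first.

# Hypothetical actions and their effects on variables we might care about.
ACTIONS = {
    "catalytic_reaction": {"ocean_acidity": -1.0, "atmospheric_oxygen": -0.25},
    "slow_remediation":   {"ocean_acidity": -0.3, "atmospheric_oxygen": 0.0},
    "do_nothing":         {"ocean_acidity": 0.0,  "atmospheric_oxygen": 0.0},
}

def fixed_objective_agent(actions):
    """Certain objective: minimize ocean acidity, ignore everything else."""
    return min(actions, key=lambda a: actions[a]["ocean_acidity"])

def uncertain_objective_agent(actions, known_weights, ask):
    """If an action changes a variable whose value to us is unknown,
    ask the human before committing; a veto rules the action out."""
    def score(action):
        total = 0.0
        for var, effect in actions[action].items():
            if effect and var not in known_weights:
                if not ask(f"'{action}' changes {var} by {effect}. OK?"):
                    return float("-inf")  # human vetoed this action
                # approved but unpriced side effect: treat as neutral
            else:
                # Reducing a weighted quantity is good, hence the minus sign.
                total += known_weights.get(var, 0.0) * -effect
        return total
    return max(actions, key=score)

# The certain agent happily burns a quarter of the oxygen:
print(fixed_objective_agent(ACTIONS))          # → catalytic_reaction

# The uncertain agent, told only that lower acidity is good, asks about
# the oxygen and respects the veto:
print(uncertain_objective_agent(
    ACTIONS,
    known_weights={"ocean_acidity": 1.0},
    ask=lambda question: False,                # stand-in human: always veto
))                                             # → slow_remediation
```

If the stand-in human answers yes instead, the uncertain agent converges on the same choice as the certain one; the control comes entirely from the query step, which only exists because the agent treats its objective as incomplete.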
What happens when general purpose AI hits the real economy? How do things change? Can we adapt? This is a very old point. Amazingly, Aristotle actually has a passage where he says, look, if we had fully automated weaving machines and plectrums that could pluck the lyre and produce music without any humans, then we wouldn’t need any workers.
That idea (I think it was Keynes who called it technological unemployment, in 1930) is very obvious to people. They think, yeah, of course, if the machine does the work, then I’m going to be unemployed.
You can think about the warehouses that companies are currently operating for e-commerce; they are half automated. In an old warehouse, you’ve got tons of stuff piled up all over the place, and humans go and rummage around and then bring it back and send it off. The way it works now, a robot goes and gets the shelving unit that contains the thing you need, but a human still has to pick the object out of the bin or off the shelf, because that’s still too difficult. But, at the same time, could you make a robot that is accurate enough to pick pretty much any object within the very wide variety of objects that you can buy? That would, at a stroke, eliminate 3 or 4 million jobs.
There's an interesting story that E.M. Forster wrote, where everyone is entirely machine dependent. The story is really about the fact that if you hand over the management of your civilization to machines, you then lose the incentive to understand it yourself or to teach the next generation how to understand it. You can see “WALL-E” actually as a modern version, where everyone is enfeebled and infantilized by the machine, and that hasn’t been possible up to now.
We put a lot of our civilization into books, but the books can’t run it for us. And so we always have to teach the next generation. If you work it out, it’s about a trillion person years of teaching and learning and an unbroken chain that goes back tens of thousands of generations. What happens if that chain breaks?
I think that’s something we have to understand as AI moves forward. You’re not going to be able to pinpoint the actual date of arrival of general purpose AI; it isn’t a single day. It’s also not the case that it’s all or nothing. The impact is going to be increasing, because every advance in AI significantly expands the range of tasks it can do.
So in that sense, I think most experts say by the end of the century, we’re very, very likely to have general purpose AI. The median is something around 2045. I'm a little more on the conservative side. I think the problem is harder than we think.
I like what John McCarthy, one of the founders of AI, said when he was asked this question: somewhere between five and 500 years. And we’re going to need, I think, several Einsteins to make it happen.