I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be.
But if we think about it, we are actually recently arrived guests on this planet, the human species. Think about it: if Earth had been created one year ago, the human species would be 10 minutes old. The industrial era started two seconds ago. Another way to look at this is to think of world GDP over the last 10,000 years; I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)
Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now, technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.
Look at these two highly distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, and it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor. We know that complicated mechanisms take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles.
So it seems pretty obvious, then, that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.
Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You build up these expert systems, and they were kind of useful for some purposes, but they were very brittle, you couldn't scale them. Basically, you got out only what you put in. But since then, a paradigm shift has taken place in the field of artificial intelligence.
Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does. The result is A.I. that is not limited to one domain -- the same system can learn to translate between any pair of languages, or learn to play any computer game on the Atari console. Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines.
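To make that contrast concrete, here is a minimal sketch in Python. The toy "images", labels, threshold, and update rule are invented purely for illustration; they are not taken from any real system, but they show the shift from a rule a programmer writes by hand to a parameter the system learns from raw data.

    # Old paradigm: a rule handcrafted by a human programmer.
    def handcrafted_is_bright(image):
        return sum(image) > 2.0  # threshold chosen and hard-coded by a person

    # New paradigm: the same kind of rule, but the parameter is learned from data.
    def learn_threshold(images, labels, steps=1000, lr=0.01):
        threshold = 0.0
        for _ in range(steps):
            for image, label in zip(images, labels):
                prediction = 1 if sum(image) > threshold else 0
                threshold -= lr * (label - prediction)  # nudge the threshold when wrong
        return threshold

    images = [[0.9, 0.8, 0.7], [0.1, 0.2, 0.1], [0.8, 0.9, 0.9], [0.0, 0.1, 0.2]]
    labels = [1, 0, 1, 0]  # 1 = "bright", 0 = "dark", supplied as training data
    print(learn_threshold(images, labels))  # the system discovers the threshold itself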
So the question is, how far are we from being able to match those tricks? A couple of years ago, we did a survey of some of the world's leading A.I. experts, to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human, so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner, the truth is nobody really knows.
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates in the gigahertz range. Signals propagate slowly along axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.
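As a back-of-the-envelope check on those figures (the 1 GHz clock rate assumed below is just a round illustrative number), the gap spans several orders of magnitude:

    neuron_rate_hz = 200        # a biological neuron fires around 200 times per second
    transistor_rate_hz = 1e9    # a present-day transistor operates in the gigahertz range
    axon_speed_m_s = 100        # signal propagation along axons, roughly 100 m/s at most
    light_speed_m_s = 3e8       # electrical/optical signals travel near the speed of light

    print(f"switching-rate ratio: ~{transistor_rate_hz / neuron_rate_hz:,.0f}x")  # ~5,000,000x
    print(f"signal-speed ratio:   ~{light_speed_m_s / axon_speed_m_s:,.0f}x")     # ~3,000,000x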
Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: A.I. starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does. Think about it: Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this, superintelligence could develop, and possibly quite rapidly.
Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences? Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic because every newspaper article about the future of A.I. has a picture of this. So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.
We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.
Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins. Another example, suppose we give the A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.
Of course, presumably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you better make sure that your definition of x incorporates everything you care about. This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
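A toy sketch of that lesson in Python; the action set and the scores are entirely invented, but they show how an optimizer that is told only "maximize x" will pick whatever scores highest on x, regardless of everything the objective left out.

    # Hypothetical actions with made-up scores. Only "smiles" is in the objective;
    # "acceptable_to_humans" is everything we care about but forgot to encode.
    actions = {
        "tell a joke":               {"smiles": 3,     "acceptable_to_humans": True},
        "show a cute animal video":  {"smiles": 5,     "acceptable_to_humans": True},
        "paralyze faces into grins": {"smiles": 10**9, "acceptable_to_humans": False},
    }

    # A "really strong optimization process" for objective x = number of smiles.
    best_action = max(actions, key=lambda a: actions[a]["smiles"])
    print(best_action)  # picks the unacceptable action, because x never mentioned acceptability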
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the Ethernet cable to create an air gap, but again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.
More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.
Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
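Here is a minimal sketch, in Python, of that value-loading idea: instead of being handed a written-out list of values, the agent estimates them from observed human approval and then acts on the estimate. The action set, approval probabilities, and learning rate below are all invented for illustration.

    import random

    actions = ["help with chores", "blast loud music at 3 a.m.", "plant a tree"]
    true_human_approval = {         # hidden from the agent; it only sees feedback samples
        "help with chores": 0.9,
        "blast loud music at 3 a.m.": 0.1,
        "plant a tree": 0.6,
    }

    estimated_value = {a: 0.5 for a in actions}  # start out maximally uncertain

    random.seed(0)
    for _ in range(3000):
        a = random.choice(actions)                                    # try an action
        feedback = 1 if random.random() < true_human_approval[a] else 0
        estimated_value[a] += 0.05 * (feedback - estimated_value[a])  # update running estimate

    # The agent now pursues whatever it predicts we would approve of most.
    print(max(estimated_value, key=estimated_value.get))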
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.
And there are also some esoteric issues that would need to be solved, sorted out: the exact details of its decision theory, how to deal with logical uncertainty and so forth. So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.
So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.
This to me looks like a thing that is well worth doing, and I can imagine that if things turn out okay, people a million years from now will look back at this century, and it might well be that they say that the one thing we did that really mattered was to get this thing right.
Thank you.
(Applause)