So I'm excited to share a few spicy thoughts on artificial intelligence. But first, let's get philosophical by starting with this quote by Voltaire, an 18th century Enlightenment philosopher, who said, "Common sense is not so common." Turns out this quote couldn't be more relevant to artificial intelligence today. Despite that, AI is an undeniably powerful tool, beating the world-class "Go" champion, acing college admission tests and even passing the bar exam.
I’m a computer scientist of 20 years, and I work on artificial intelligence. I am here to demystify AI. So AI today is like a Goliath. It is literally very, very large. It is speculated that the recent ones are trained on tens of thousands of GPUs and a trillion words. Such extreme-scale AI models, often referred to as "large language models," appear to demonstrate sparks of AGI, artificial general intelligence. Except when they make small, silly mistakes, which they often do. Many believe that whatever mistakes AI makes today can be easily fixed with brute force, bigger scale and more resources. What possibly could go wrong?
So there are three immediate challenges we face already at the societal level. First, extreme-scale AI models are so expensive to train, and only a few tech companies can afford to do so. So we already see the concentration of power. But what's worse for AI safety, we are now at the mercy of those few tech companies because researchers in the larger community do not have the means to truly inspect and dissect these models. And let's not forget their massive carbon footprint and the environmental impact.
And then there are these additional intellectual questions. Can AI, without robust common sense, be truly safe for humanity? And is brute-force scale really the only way and even the correct way to teach AI?
So I’m often asked these days whether it's even feasible to do any meaningful research without extreme-scale compute. And I work at a university and nonprofit research institute, so I cannot afford a massive GPU farm to create enormous language models. Nevertheless, I believe that there's so much we need to do and can do to make AI sustainable and humanistic. We need to make AI smaller, to democratize it. And we need to make AI safer by teaching human norms and values. Perhaps we can draw an analogy from "David and Goliath," here, Goliath being the extreme-scale language models, and seek inspiration from an old-time classic, "The Art of War," which tells us, in my interpretation, know your enemy, choose your battles, and innovate your weapons.
Let's start with the first, know your enemy, which means we need to evaluate AI with scrutiny. AI is passing the bar exam. Does that mean that AI is robust at common sense? You might assume so, but you never know.
So suppose I left five clothes to dry out in the sun, and it took them five hours to dry completely. How long would it take to dry 30 clothes? GPT-4, the newest, greatest AI system, says 30 hours. Not good. A different one. I have a 12-liter jug and a six-liter jug, and I want to measure six liters. How do I do it? Just use the six-liter jug, right? GPT-4 spits out some very elaborate nonsense.
(Laughter)
Step one, fill the six-liter jug. Step two, pour the water from the six-liter jug into the 12-liter jug. Step three, fill the six-liter jug again. Step four, very carefully, pour the water from the six-liter jug into the 12-liter jug. And finally you have six liters of water in the six-liter jug, which should be empty by now.
(Laughter)
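The commonsense answers GPT-4 misses here fit in a few lines. A toy sketch of the intended reasoning — the function names and structure are my illustration, not anything from the talk:

```python
def drying_time(n_clothes, hours_per_garment=5):
    # Clothes dry in parallel: each garment dries on its own,
    # so (assuming enough room in the sun) the count is irrelevant.
    return hours_per_garment

def measure_six_liters():
    # With a 12-liter jug and a six-liter jug available,
    # measuring six liters takes exactly one step.
    return ["fill the 6-liter jug"]

print(drying_time(30))        # still 5 hours, not 30
print(measure_six_liters())   # a one-step plan, no pouring back and forth
```

The point of the sketch is that the correct answers require no computation at all, only the background knowledge that drying happens in parallel and that a six-liter jug already measures six liters.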
OK, one more. Would I get a flat tire by bicycling over a bridge that is suspended over nails, screws and broken glass? Yes, highly likely, GPT-4 says, presumably because it cannot correctly reason that if a bridge is suspended over the broken nails and broken glass, then the surface of the bridge doesn't touch the sharp objects directly.
OK, so how would you feel about an AI lawyer that aced the bar exam yet randomly fails at such basic common sense? AI today is unbelievably intelligent and then shockingly stupid.
(Laughter)
It is an unavoidable side effect of teaching AI through brute-force scale. Some scale optimists might say, “Don’t worry about this. All of these can be easily fixed by adding similar examples as yet more training data for AI." But the real question is this. Why should we even do that? You are able to get the correct answers right away without having to train yourself with similar examples. Children do not even read a trillion words to acquire such a basic level of common sense.
So this observation leads us to the next wisdom, choose your battles. So what fundamental questions should we ask right now and tackle today in order to overcome this status quo with extreme-scale AI? I'll say common sense is among the top priorities.
So common sense has been a long-standing challenge in AI. To explain why, let me draw an analogy to dark matter. So only five percent of the universe is normal matter that you can see and interact with, and the remaining 95 percent is dark matter and dark energy. Dark matter is completely invisible, but scientists speculate that it's there because it influences the visible world, even including the trajectory of light. So for language, the normal matter is the visible text, and the dark matter is the unspoken rules about how the world works, including naive physics and folk psychology, which influence the way people use and interpret language.
So why is this common sense even important? Well, in a famous thought experiment proposed by Nick Bostrom, AI was asked to produce and maximize paper clips. And that AI decided to kill humans to utilize them as additional resources, to turn you into paper clips, because AI didn't have the basic understanding of human values. Now, writing a better objective and equation that explicitly states “Do not kill humans” will not work either, because AI might go ahead and kill all the trees, thinking that's a perfectly OK thing to do. And in fact, there are endless other things that AI obviously shouldn’t do while maximizing paper clips, including “Don’t spread fake news,” “Don’t steal,” “Don’t lie,” which are all part of our commonsense understanding of how the world works.
However, the AI field for decades has considered common sense as a nearly impossible challenge. So much so that when my students and colleagues and I started working on it several years ago, we were very much discouraged. We’ve been told that it’s a research topic of ’70s and ’80s; shouldn’t work on it because it will never work; in fact, don't even say the word to be taken seriously. Now fast forward to this year, I’m hearing: “Don’t work on it because ChatGPT has almost solved it.” And: “Just scale things up and magic will arise, and nothing else matters.”
So my position is that giving true, human-like robust common sense to AI is still a moonshot. And you don’t reach the Moon by making the tallest building in the world one inch taller at a time. Extreme-scale AI models do acquire an ever-increasing amount of commonsense knowledge, I'll give you that. But remember, they still stumble on trivial problems that even children can solve.
So AI today is awfully inefficient. And what if there is an alternative path or path yet to be found? A path that can build on the advancements of the deep neural networks, but without going so extreme with the scale.
So this leads us to our final wisdom: innovate your weapons. In the modern-day AI context, that means innovate your data and algorithms. OK, so there are, roughly speaking, three types of data that modern AI is trained on: raw web data, crafted examples custom developed for AI training, and then human judgments, also known as human feedback on AI performance. If the AI is only trained on the first type, raw web data, which is freely available, it's not good, because this data is loaded with racism and sexism and misinformation. So no matter how much of it you use, garbage in, garbage out. So the newest, greatest AI systems are now powered with the second and third types of data that are crafted and judged by human workers. It's analogous to writing specialized textbooks for AI to study from and then hiring human tutors to give constant feedback to AI. These are proprietary data, by and large, speculated to cost tens of millions of dollars. We don't know what's in them, but they should be open and publicly available so that we can inspect them and ensure they support diverse norms and values. So for this reason, my teams at UW and AI2 have been working on commonsense knowledge graphs as well as moral norm repositories to teach AI basic commonsense norms and morals. Our data is fully open so that anybody can inspect the content and make corrections as needed, because transparency is key for such an important research topic.
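The three data types can be pictured as a tiny curation step over a training corpus. A minimal sketch, where the schema, field names and threshold are entirely my own assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    text: str
    source: str                          # "web", "crafted", or "feedback"
    human_score: Optional[float] = None  # set only for human-judged data

def build_training_set(examples, min_score=0.5):
    """Keep curated data; raw web text alone is 'garbage in, garbage out'."""
    kept = []
    for ex in examples:
        if ex.source == "crafted":
            kept.append(ex)              # the specialized-'textbook' data
        elif ex.source == "feedback" and (ex.human_score or 0.0) >= min_score:
            kept.append(ex)              # tutor-style human judgments
    return kept
```

For example, `build_training_set([Example("a", "web"), Example("b", "crafted")])` keeps only the crafted example; the raw web item is dropped rather than trusted at face value.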
Now let's think about learning algorithms. No matter how amazing large language models are, by design they may not be the best suited to serve as reliable knowledge models. These language models do acquire a vast amount of knowledge, but they do so as a byproduct, as opposed to a direct learning objective, resulting in unwanted side effects such as hallucinations and a lack of common sense. Now, in contrast, human learning is never about predicting which word comes next; it's really about making sense of the world and learning how the world works. Maybe AI should be taught that way as well.
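The "predicting which word comes next" objective can be shown at toy scale. A minimal bigram sketch (nothing like GPT-4 internally, just the shape of the training signal the talk is contrasting with human learning):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count, for each word, which words follow it in the corpus.
    # The entire "objective" is predicting the next token from the previous one.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Return the most frequent follower seen in training, if any.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat the cat ran the cat sat".split())
print(predict_next(model, "cat"))  # "sat": the statistically likeliest follower
```

Note that nothing in this objective asks the model to understand cats or sitting; knowledge about the world, if it appears, is a byproduct of fitting these statistics.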
So as a quest toward more direct commonsense knowledge acquisition, my team has been investigating potential new algorithms, including symbolic knowledge distillation, which can take a very large language model, shown here at a size I couldn't fit onto the screen because it's too large, and crunch it down to much smaller commonsense models using deep neural networks. And in doing so, we also generate, algorithmically, a human-inspectable, symbolic, commonsense knowledge representation, so that people can inspect it, make corrections, and even use it to train other neural commonsense models.
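The distillation loop can be sketched in outline. Everything below (the function names, the `xEffect`/`xIntent` relation labels styled after ATOMIC-like commonsense resources, and the toy teacher and critic) is my illustration under stated assumptions, not the team's actual code:

```python
def distill(teacher_generate, critic_score, events, threshold=0.8):
    """Symbolic knowledge distillation, sketched: a large teacher model
    proposes commonsense inferences as symbolic (event, relation,
    inference) triples; a critic filters out low-quality ones; the
    surviving, human-inspectable triples become training data for a
    much smaller student model."""
    corpus = []
    for event in events:
        for relation, inference in teacher_generate(event):
            triple = (event, relation, inference)
            if critic_score(triple) >= threshold:
                corpus.append(triple)  # kept: symbolic, auditable, correctable
    return corpus

# Toy stand-ins for the teacher model and the critic:
def toy_teacher(event):
    return [("xEffect", f"as a result of '{event}', someone gets wet"),
            ("xIntent", f"'{event}' happens for no reason at all")]

def toy_critic(triple):
    return 0.9 if "gets wet" in triple[2] else 0.1

print(distill(toy_teacher, toy_critic, ["X jumps into a pool"]))
```

Because the intermediate corpus is plain symbolic triples rather than opaque weights, a person can read, veto or fix any entry before a student model ever trains on it, which is the transparency point the talk is making.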
More broadly, we have been tackling this seemingly impossible giant puzzle of common sense, ranging from physical, social and visual common sense to theory of minds, norms and morals. Each individual piece may seem quirky and incomplete, but when you step back, it's almost as if these pieces weave together into a tapestry that we call human experience and common sense.
We're now entering a new era in which AI is almost like a new intellectual species with unique strengths and weaknesses compared to humans. In order to make this powerful AI sustainable and humanistic, we need to teach AI common sense, norms and values.
Thank you.
(Applause)
Chris Anderson: Look at that. Yejin, please stay one sec. This is so interesting, this idea of common sense. We obviously all really want this from whatever's coming. But help me understand. Like, so we've had this model of a child learning. How does a child gain common sense apart from the accumulation of more input and some, you know, human feedback? What else is there?
Yejin Choi: So fundamentally, there are several things missing, but one of them is, for example, the ability to make hypotheses, carry out experiments, interact with the world and refine those hypotheses. We abstract away concepts about how the world works, and that's how we truly learn, as opposed to today's language models. Some of that is really not quite there yet.
CA: You use the analogy that we can’t get to the Moon by extending a building a foot at a time. But the experience that most of us have had of these language models is not a foot at a time. It's, sort of, a breathtaking acceleration. Are you sure, given the pace at which those things are going? Each next level seems to be bringing with it what feels kind of like wisdom and knowledge.
YC: I totally agree that it's remarkable how much this scaling things up really enhances the performance across the board. So there's real learning happening due to the scale of the compute and data.
However, there's a quality of learning that is still not quite there. And the thing is, we don't yet know whether we can fully get there or not just by scaling things up. And if we cannot, then there's this question of what else? And then even if we could, do we like this idea of having very, very extreme-scale AI models that only a few can create and own?
CA: I mean, if OpenAI said, you know, "We're interested in your work, we would like you to help improve our model," can you see any way of combining what you're doing with what they have built?
YC: Certainly what I envision will need to build on the advancements of deep neural networks. And it might be that there’s some scale Goldilocks zone, such that ... I'm not imagining that smaller is better either, by the way. It's likely that there's a right amount of scale, but beyond that, the winning recipe might be something else. So some synthesis of ideas will be critical here.
CA: Yejin Choi, thank you so much for your talk.
(Applause)