Today, artificial intelligence helps doctors diagnose patients, pilots fly commercial aircraft, and city planners predict traffic. But no matter what these AIs are doing, the computer scientists who designed them likely don’t know exactly how they’re doing it. This is because artificial intelligence is often self-taught, working off a simple set of instructions to create a unique array of rules and strategies. So how exactly does a machine learn?
There are many different ways to build self-teaching programs. But they all rely on the three basic types of machine learning: unsupervised learning, supervised learning, and reinforcement learning. To see these in action, let’s imagine researchers are trying to pull information from a set of medical data containing thousands of patient profiles.
First up, unsupervised learning. This approach would be ideal for analyzing all the profiles to find general similarities and useful patterns. Maybe certain patients have similar disease presentations, or perhaps a treatment produces specific sets of side effects. This broad pattern-seeking approach can be used to identify similarities between patient profiles and find emerging patterns, all without human guidance.
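As an illustrative sketch, the similarity-grouping described above can be approximated with a simple clustering algorithm such as k-means. Everything here is invented for illustration: the two features (resting heart rate, systolic blood pressure) and the toy patient numbers stand in for real medical data, and real systems use far richer features and models.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign each point to its nearest
    center, then move each center to the mean of its assigned points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # leave an empty cluster's center where it was
                centers[c] = tuple(sum(vals) / len(members)
                                   for vals in zip(*members))
    return centers, clusters

# Hypothetical two-feature patient profiles: (resting heart rate, systolic BP).
profiles = [(62, 110), (65, 115), (60, 108), (95, 150), (98, 155), (92, 148)]
centers, clusters = kmeans(profiles, k=2)
```

No one tells the program what the two groups mean; it only discovers that the profiles fall into two clusters, which is exactly the "no human guidance" property of unsupervised learning.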
But let's imagine doctors are looking for something more specific. These physicians want to create an algorithm for diagnosing a particular condition. They begin by collecting two sets of data: medical images and test results from both healthy patients and those diagnosed with the condition. Then, they input this data into a program designed to identify features shared by the sick patients but not the healthy patients. Based on how frequently it sees certain features, the program will assign values to those features’ diagnostic significance, generating an algorithm for diagnosing future patients. However, unlike unsupervised learning, doctors and computer scientists have an active role in what happens next. Doctors will make the final diagnosis and check the accuracy of the algorithm’s prediction. Then computer scientists can use the updated datasets to adjust the program’s parameters and improve its accuracy. This hands-on approach is called supervised learning.
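A minimal sketch of this feature-weighting idea, with feature names and labels invented for illustration. It scores each feature by how much more often it appears in the doctor-labeled sick group than in the healthy group; real diagnostic models use far more sophisticated statistics, but the labeled-data-in, weights-out shape is the same.

```python
# Labeled data supplied by doctors; feature names are hypothetical.
sick = [{"lesion", "fever", "fatigue"}, {"lesion", "fever"}, {"lesion", "fatigue"}]
healthy = [{"fatigue"}, {"fever"}, set()]

def train(sick, healthy):
    """Weight each feature by how much more often it appears in sick
    patients than in healthy ones (its 'diagnostic significance')."""
    features = set().union(*sick, *healthy)
    weights = {}
    for f in features:
        freq_sick = sum(f in profile for profile in sick) / len(sick)
        freq_healthy = sum(f in profile for profile in healthy) / len(healthy)
        weights[f] = freq_sick - freq_healthy
    return weights

def predict(weights, profile, threshold=0.5):
    """Flag a new patient when their features' combined weight is high enough."""
    return sum(weights.get(f, 0.0) for f in profile) >= threshold

weights = train(sick, healthy)
# When doctors confirm or correct a diagnosis, the labeled sets grow and
# train() is simply run again: that feedback loop is the "supervision".
```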
Now, let’s say these doctors want to design another algorithm to recommend treatment plans. Since these plans will be implemented in stages, and they may change depending on each individual's response to treatments, the doctors decide to use reinforcement learning. This program uses an iterative approach to gather feedback about which medications, dosages, and treatments are most effective. Then, it compares that data against each patient’s profile to create their unique, optimal treatment plan. As the treatments progress and the program receives more feedback, it can constantly update the plan for each patient.

None of these three techniques are inherently smarter than any other. While some require more or less human intervention, they all have their own strengths and weaknesses which make them best suited for certain tasks. However, by using them together, researchers can build complex AI systems, where individual programs can supervise and teach each other. For example, when our unsupervised learning program finds groups of patients that are similar, it could send that data to a connected supervised learning program. That program could then incorporate this information into its predictions. Or perhaps dozens of reinforcement learning programs might simulate potential patient outcomes to collect feedback about different treatment plans.
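One of the simplest forms of this try-observe-update loop is an epsilon-greedy bandit, sketched below. The treatment names and their "true" response rates are invented for illustration, and each loop iteration simulates one round of patient feedback; a real clinical system would be vastly more careful, but the pattern of acting, receiving a reward, and updating an estimate is the core of reinforcement learning.

```python
import random

def choose_treatment(estimates, rng, epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known plan, sometimes explore."""
    if rng.random() < epsilon:
        return rng.choice(list(estimates))
    return max(estimates, key=estimates.get)

def update(estimates, counts, treatment, reward):
    """Nudge the running estimate of a plan's success rate toward new feedback."""
    counts[treatment] += 1
    estimates[treatment] += (reward - estimates[treatment]) / counts[treatment]

# Invented "true" response rates, hidden from the learner.
true_rates = {"plan_a": 0.3, "plan_b": 0.8, "plan_c": 0.5}
rng = random.Random(42)
estimates = {t: 0.0 for t in true_rates}
counts = {t: 0 for t in true_rates}

for _ in range(2000):  # each round simulates one patient's feedback
    t = choose_treatment(estimates, rng)
    reward = 1.0 if rng.random() < true_rates[t] else 0.0
    update(estimates, counts, t, reward)
```

After enough feedback, the program's estimates converge on the most effective plan, even though it was never told the true rates up front.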
There are numerous ways to create these machine-learning systems, and perhaps the most promising models are those that mimic the relationship between neurons in the brain. These artificial neural networks can use millions of connections to tackle difficult tasks like image recognition, speech recognition, and even language translation. However, the more self-directed these models become, the harder it is for computer scientists to determine how these self-taught algorithms arrive at their solution. Researchers are already looking at ways to make machine learning more transparent. But as AI becomes more involved in our everyday lives, these enigmatic decisions have increasingly large impacts on our work, health, and safety. So as machines continue learning to investigate, negotiate, and communicate, we must also consider how to teach them to teach each other to operate ethically.
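The building block of those networks can be shown at its smallest scale: a single artificial neuron that sums weighted inputs, squashes the result through an activation function, and adjusts its weights from its errors. The toy task below (flagging a case only when both of two hypothetical risk factors are present) is invented for illustration; real networks stack millions of such units.

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

def train_neuron(data, lr=0.5, epochs=5000):
    """One artificial neuron trained by gradient descent:
    weighted inputs, a bias, and a sigmoid activation."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = out - target  # gradient of the cross-entropy loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Toy task: flag a patient only when both risk factors are present (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

Notice that the learned weights are just numbers with no human-readable explanation attached, which is a small-scale glimpse of why large networks are so hard to interpret.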