I grew up with parents who are engineers. They were among the first to bring computerized manufacturing to my hometown in India. Growing up as a young girl, I remember being fascinated by how these computer programs didn't just reside within a computer, but touched the physical world and produced beautiful, precise metal parts. Over the last two decades, as I pursued AI research, this memory continued to inspire me to connect the physical and digital worlds.
I am working on AI that transforms the way we do science and engineering. Scientific research and engineering design currently involve a lot of trial and error. Many long hours are spent in the lab doing experiments. So it's not just great ideas that propel science forward: you need experiments to validate findings and spark new ideas.
How can language models help here? What if I ask ChatGPT to come up with a better design for an aircraft wing, or a drone that flies in turbulent wind? It may suggest something. It may even draw something. But how do we know this is any good? We don't.
Language models hallucinate because they have no physical grounding. While language models may help generate new ideas, they cannot attack the hard part of science, which is simulating the necessary physics to replace the lab experiments. To model scientific and physical phenomena, text alone is not sufficient. To get to AI with universal physical understanding, we need to train it on data from the world we observe, and not just that, but also its hidden details.
From the intricacies of quantum chemistry at the smallest scales, to the molecules and proteins that govern all biological processes, to the ocean currents and clouds at planetary scales and beyond, we need AI that can capture this whole range of physical phenomena. We need AI that can zoom into the fine details in order to simulate these phenomena accurately.
To predict how clouds move and change in our atmosphere, we need to be able to zoom into the fine details of the turbulent fluid flow. Standard deep learning uses a fixed number of pixels, so if you zoom in, the picture gets blurry and not all the details are captured.
We invented an AI technology called neural operators that represents the data as continuous functions or shapes, and allows us to zoom in indefinitely, to any resolution or scale. Neural operators allow us to train on data at multiple scales or resolutions, and they also allow us to incorporate the knowledge of mathematical equations to fill in the finer details when only limited-resolution data is available. Such learning at multiple scales is essential for scientific understanding, and neural operators enable it.
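To make this concrete, here is a minimal sketch of the core idea behind one family of neural operators, the Fourier neural operator. The input function is transformed into the frequency domain, a learned linear map is applied to a fixed number of low-frequency modes, and the result is transformed back. Because the weights live on Fourier modes rather than pixels, the same layer can be evaluated at any resolution. The class name, shapes, and 1D setting below are illustrative choices, not the exact implementation behind the models in this talk.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    """Sketch of one Fourier-neural-operator layer (1D case).

    It learns a linear map on the lowest `n_modes` Fourier modes,
    so the same weights apply at any input resolution.
    """
    def __init__(self, channels: int, n_modes: int):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / channels
        # Complex weights: one (channels x channels) matrix per retained mode.
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, channels, n_points) samples of a continuous function.
        u_hat = torch.fft.rfft(u)                       # to frequency domain
        out_hat = torch.zeros_like(u_hat)
        m = min(self.n_modes, u_hat.shape[-1])
        # Mix channels mode-by-mode on the retained low frequencies.
        out_hat[..., :m] = torch.einsum(
            "bim,iom->bom", u_hat[..., :m], self.weight[..., :m]
        )
        return torch.fft.irfft(out_hat, n=u.shape[-1])  # back to physical space

# The same layer evaluates on coarse or fine discretizations of a function:
layer = SpectralConv1d(channels=4, n_modes=8)
coarse = layer(torch.randn(2, 4, 64))    # 64-point grid
fine = layer(torch.randn(2, 4, 256))     # 256-point grid, same weights
```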
With neural operators, we can simulate physical phenomena such as fluid dynamics as much as a million times faster than traditional simulations. Last year, we used neural operators to invent a better medical catheter. A medical catheter is a tube that draws fluids out of the human body. Unfortunately, bacteria tend to swim upstream against the fluid flow and infect the patient. In fact, there are more than half a million cases of such healthcare-associated infections annually, and this is one of the leading causes. We used neural operators to change the inside of the catheter from smooth to ridged. With ridges, vortices are created as the fluid flows, and we can hope these vortices will stop the bacteria from swimming upstream.
But to get this to work, the shape of the ridges has to be exactly right. In the past, this would have been done by trial and error: design a version of the catheter, build it, take it to the lab, form a hypothesis about what went wrong, redesign, and repeat.
Instead, we taught AI the behavior of the fluid flow inside the tube, and with it, our neural operator model was able to directly propose an optimized design. We 3D-printed the design only once, to verify that it worked. In the video, you're seeing our catheter being tested in the lab. The bacteria are not able to swim upstream, and are instead pushed out with the fluid flow. In fact, we measured a more than 100-fold reduction in bacterial contamination.
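Once a neural operator has learned the flow, design optimization can become gradient-based rather than trial and error: because the surrogate model is differentiable, we can backpropagate from a performance objective all the way to the design parameters. The sketch below illustrates that loop; the surrogate network, the ridge parameterization, and the objective are hypothetical stand-ins, not the actual catheter model.

```python
import torch

# Hypothetical stand-in: a trained, differentiable surrogate mapping
# ridge-geometry parameters to a scalar "upstream bacterial flux" score.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
surrogate.requires_grad_(False)  # surrogate weights stay frozen

# Design parameters to optimize: ridge height, spacing, angle (normalized units).
design = torch.nn.Parameter(torch.tensor([0.5, 1.0, 0.3]))
opt = torch.optim.Adam([design], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    upstream_flux = surrogate(design).squeeze()  # predicted bacterial transport
    upstream_flux.backward()                     # gradients flow into the geometry
    opt.step()

print("optimized ridge parameters:", design.detach())
```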
So in this case, the neural operators were specialized to understand fluid flow in a tube. What other pressing problems can AI help us tackle?
Can deep learning beat numerical weather models? A group of leading weather scientists asked this question in February 2021, in a Royal Society publication. They felt that AI was still in its infancy, that a number of fundamental breakthroughs would be needed for it to become competitive with traditional weather models, and that this would take years or even decades. Exactly a year later, we released FourCastNet. Using neural operators, we built the first fully AI-based weather model that is high resolution and tens of thousands of times faster than traditional weather models. What used to take a big supercomputer can now run on a gaming PC that you may have at home.
This model is also running at the European Centre for Medium-Range Weather Forecasts, one of the world's premier weather agencies. And our AI model is not just tens of thousands of times faster than traditional models; it is also more accurate in many cases.
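Models like FourCastNet typically make forecasts autoregressively: a learned operator advances the global atmospheric state by a fixed time step, and feeding its own output back in rolls the forecast out days ahead. Here is a minimal sketch of that loop, assuming a six-hour step; the placeholder model and toy grid stand in for the real high-resolution global fields.

```python
import torch

def rollout(model: torch.nn.Module, state: torch.Tensor, n_steps: int) -> list:
    """Autoregressive forecast: feed each predicted state back into the model."""
    trajectory = [state]
    with torch.no_grad():
        for _ in range(n_steps):
            state = model(state)       # advance the atmosphere by one time step
            trajectory.append(state)
    return trajectory

# Toy "atmosphere": 2 fields on a coarse 32x64 lat-lon grid (placeholder).
step_model = torch.nn.Identity()       # stand-in for a trained weather model
initial = torch.randn(1, 2, 32, 64)
forecast = rollout(step_model, initial, n_steps=40)  # 40 x 6h = a 10-day forecast
```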
On September 16 last year, Hurricane Lee hit the coast of Nova Scotia, Canada. A full ten days earlier, our FourCastNet model correctly predicted that the hurricane would make landfall, while the traditional weather model predicted it would skirt the coast. Only five days later, on September 11, did the traditional weather model correct its forecast to predict landfall.
Extreme weather events such as Hurricane Lee will only increase unless we take action on climate change, such as by finding new, clean sources of energy. Nuclear fusion is one of them.
But unfortunately, there are still big challenges. The fusion reactor heats the plasma to extremely high temperatures to get fusion started, and sometimes this hot plasma can escape confinement and damage the reactor.
We train neural operators to simulate the evolution of the plasma inside the reactor, and with them, we can predict disruptions before they occur and take corrective action in the real world. We are bringing the possibility of nuclear fusion closer to reality.
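In control terms, this is a predict-then-act loop: a learned model forecasts the plasma state a short horizon ahead, a disruption score is computed on the forecast, and the controller intervenes if the score crosses a threshold. The sketch below is purely illustrative; the model, the risk score, and the mitigation step are hypothetical placeholders, not the real reactor interface.

```python
import torch

def disruption_risk(predicted_state: torch.Tensor) -> float:
    # Toy score: large predicted field amplitudes signal loss of confinement.
    return predicted_state.abs().max().item()

def disruption_imminent(model, state, threshold=0.8, horizon=10) -> bool:
    """Forecast the plasma a short horizon ahead and flag high-risk states."""
    with torch.no_grad():
        future = state
        for _ in range(horizon):
            future = model(future)      # one predicted time step (placeholder model)
    return disruption_risk(future) > threshold

# Illustrative usage with stand-ins for the real system:
model = torch.nn.Identity()             # placeholder for a trained operator
state = torch.randn(1, 3, 64, 64)       # placeholder plasma fields
if disruption_imminent(model, state):
    print("disruption predicted: trigger mitigation")  # corrective action
```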
So neural operators and AI broadly are enabling us to tackle hard scientific challenges such as climate change and nuclear fusion. To me, this is just the beginning.
So far, these AI models are limited to the narrow domains they're trained on. What if we had an AI model that could solve any scientific problem, from designing better drones, aircraft, and rockets to better drugs and medical devices? Such an AI model would greatly benefit humanity.
This is what we are working on. We are building a generalist AI model with emergent capabilities that can simulate any physical phenomena and generate novel designs that were previously out of reach. This is how we scale up neural operators to enable general intelligence with universal physical understanding.
Thank you.
(Applause)