All right. Good afternoon, y’all. Let's talk about blending reality and imagination. But first, let's take a step back in time to 2001. As an 11-year-old in India, I became obsessed with computer graphics and visual effects. Of course, at that age, it meant making cheesy videos kind of like this. But therein started a foundational theme in my life, the quest to blend reality and imagination. And that quest has stayed with me and permeated across my decade-long career in tech, working as a product manager at companies like Google and as a content creator on platforms like YouTube and TikTok.
So today, let's deconstruct this quest to blend reality and imagination and explore how it’s getting supercharged -- buzzword alert -- by artificial intelligence. Let's start with the reality bit.
You've probably heard about photogrammetry. It's the art and science of measuring stuff in the real world using photos and other sensors. What required massive data centers and teams of experts in the 2000s became increasingly democratized by the 2010s. Then, of course, machine learning came along and took things to a whole new level with techniques like neural radiance fields, or NeRFs.
What you're seeing here is an AI model creating a ground-up volumetric 3D representation using 2D images alone. But unlike older techniques for reality capture, NeRFs do a really good job of encapsulating the sheer complexity and nuance of reality. The vibe, if you will.
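For the technically curious, here's a minimal sketch in Python of the core NeRF idea: a small neural network maps a 3D point and a viewing direction to a color and a density, and colors along each camera ray get composited with volume-rendering weights. This is a toy version, not the production pipelines behind apps like Luma, which add positional encodings, smarter sampling and much bigger models; every name and number in it is illustrative.

```python
# Toy NeRF sketch: a tiny MLP maps (position, view direction) -> (color, density),
# and a pixel color is the volume-rendered composite of samples along a camera ray.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Input: 3D position + 3D view direction (no positional encoding in this sketch).
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 volume density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3])     # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Composite colors along one camera ray (classic volume rendering)."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # sample points along the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = t[1] - t[0]                               # uniform sample spacing
    alpha = 1.0 - torch.exp(-sigma * delta)           # opacity of each sample
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), 0)[:-1]
    weights = alpha * trans                           # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)        # final pixel color

# Training renders pixels this way, compares them against the captured 2D photos,
# and backpropagates -- so the scene itself ends up "baked into" the MLP weights.
```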
Twelve months later, you can do all of this with the iPhone in your pocket, using apps like Luma. It's like 3D screenshots for the real world. Capture anything once and reframe it infinitely in postproduction, so you can start building that collection of spaces, places and objects that you truly care about and conjure them up in your future creations.
So that's the reality bit. As NeRFs were popping off last year, the AI summer was also in full effect, with Midjourney, DALL-E 2, Stable Diffusion all hitting the market around the same time. But what I fell in love with was inpainting. This technique allows you to take existing imagery and augment it with whatever you like, and the results are photorealistically fantastic. It blew my mind because stuff that would have taken me like three hours in classical workflows I could pull off in just three minutes.
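If you want to try inpainting yourself, here's a rough sketch using the open-source diffusers library. It's one way to do what I'm describing, not the exact tool chain I used, and the file names and prompt are just placeholders.

```python
# Minimal sketch of diffusion-based inpainting with Hugging Face diffusers:
# paint new content only where the mask is white, keeping the rest of the image.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("room.png").convert("RGB").resize((512, 512))   # existing imagery
mask = Image.open("mask.png").convert("RGB").resize((512, 512))    # white = region to repaint

result = pipe(
    prompt="an ornate carved wooden cabinet, warm evening light",
    image=image,
    mask_image=mask,
).images[0]
result.save("room_inpainted.png")
```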
But I wanted more. Enter ControlNet, a game-changing technique by Stanford researchers that allows you to use various input conditions to guide and control the AI image generation process. So in my case, I could take the depth information and the texture detail from my 3D scans and use it to literally reskin reality.
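Here's a hedged sketch of that depth-conditioned setup with ControlNet, again via diffusers: a depth map rendered from the 3D scan steers where the diffusion model places geometry, so the "reskin" respects the original layout. The model IDs are real public checkpoints, but the file paths and prompt are illustrative, not my actual pipeline.

```python
# Depth-conditioned image generation with ControlNet:
# the depth map from a 3D scan constrains the layout of the generated image.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = Image.open("scan_depth.png").convert("RGB")  # depth render from the 3D scan

result = pipe(
    prompt="a living room in traditional Rajasthani decor, intricate woodwork",
    image=depth_map,              # the control condition guiding the layout
    num_inference_steps=30,
).images[0]
result.save("reskinned.png")
```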
Now, this isn't just cool video. There are a lot of useful use cases, too. For example, in this case I'm taking a 3D scan of my parents' drawing room, as my mother likes to call it, and reskinning it to different styles of Indian decor and doing so while respecting the spatial context and the layout of the interior space. If you squint, I'm sure you can see how this is going to transform architecture and interior design forever.
You could take that 2016 scan of a Buddha statue and reskin it to be gloriously golden while pulling off these impossible camera moves you just couldn't do any other way. Or you could take that vacation footage from your trip to Tokyo and bring these cherry blossoms to life in a whole new way. And let me tell you, cherry blossoms look really good during the day, but they look even better at night. Oh, my God. They sure are glowing.
It's almost like this dreamlike quality where you can use AI to accentuate the best aspects of the real world. Natural landscapes look just as beautiful. Like this waterfall that could be on another planet. But of course, you could go over the hills and far away to the French Alps from another dimension.
But it's not just static scenes. You can do this stuff with video, too. I can't wait till this technology is running at 30 frames per second because it's going to transform augmented reality and 3D rendering. I mean, how soon until we're channel-surfing realities layered on top of the real world?
Of course, just like reality capture got democratized, all these techniques from last year are getting even easier to use. So instead of me spending hours weaving together a bunch of different tools, apps like Runway and Kaiber let you do exactly the same stuff with just a couple of clicks. Want to go from day to night? No problemo. Want to get that retro 90s aesthetic from "Full House"? You can do that too.
But it goes beyond reality capture. Companies like Wonder Dynamics are turning video into this immaculate form of performance capture so you can embody fantastical creatures using the phone in your pocket. This is stuff that James Cameron only dreamt about in the 2000s. And now you can do it with your iPhone? That's absolutely wild to me.
So when I look back at the past two decades and this ill-tailored tapestry of tools that I've had to learn, I feel a sense of optimism for what lies ahead for the next generation of creators. The 11-year-olds of today don't have to worry about all of that crap. All they need to do is have a creative vision and a knack for working in concert with these AI models, these AI models that are truly a distillation of human knowledge and creativity. And that's a future I'm excited about, a future where you can blend reality and imagination with your trusty AI copilot.
Thank you very much.
(Applause)