I'd like to start by asking you to imagine a color that you've never seen before. Just for a second, give this a try.
Can you actually visualize a color you've never been able to perceive? I never get tired of trying this, even though I know it's not an easy challenge. And the thing is, we can't imagine something without drawing on our experiences. A color outside the spectrum we can perceive is beyond our ability to conjure up. It's almost as if there's a boundary to our imagination, where all the colors we can imagine are only shades of colors we have previously seen. Yet we know for a fact that color frequencies outside our visible spectrum are there. And scientists believe there are species with many more types of photoreceptors than the three color-sensitive ones we humans have.
And by the way, not all humans see the world in the same way. Some of us are colorblind to varying degrees, and very often we don't even agree on small things, like whether a dress on the internet is blue and black or white and gold. But one of my favorite creatures, the peacock mantis shrimp, is estimated to have 12 to 16 types of photoreceptors, which suggests the world might look far more colorful to them.
So what about artificial intelligence? Can AI help us see beyond our human capabilities?
Well, I've been working with AI for the past five years, and in my experience, it can only see within the data it gets fed. So you might be wondering: if AI can't imagine anything new, why would an artist see any point in using it? My answer is that it can help augment our creativity, because there's value in combining known elements to form new ones. And this boundary of what we can imagine based on what we have experienced is the place I have been exploring.
For me, it started with jellyfish on a screen at an aquarium, seen through those old 3D glasses, which I hope you remember, the ones with the blue and red lenses. That experience made me want to recreate their textures. But not just that: I also wanted to create new jellyfish I hadn't seen before, like these. What started with jellyfish quickly escalated to other sea creatures like sea anemones, coral and fish, and from there came amphibians, birds and insects. This became a series called “Neural Zoo”.
But when you look closely, what do you see? There's no single creature in these images. AI augments my creative process by letting me distill and recombine textures, something that would otherwise take me months to draw by hand. Plus, I'm actually terrible at drawing.
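For readers curious how texture recombination like this can work under the hood, here is a minimal sketch in the spirit of the technique, using classic neural style transfer with a pretrained VGG in PyTorch. The image files, layer choices and loss weights are illustrative assumptions, not the artist's actual pipeline.

```python
# A minimal texture-recombination sketch: Gatys-style neural style
# transfer with a pretrained VGG-19. File names, layer indices and loss
# weights are illustrative assumptions, not the artist's pipeline.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # only the image is optimized, not the network

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x, layers=(0, 5, 10, 19, 28)):  # conv1_1 ... conv5_1
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):  # texture statistics: channel-by-channel feature correlations
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = load("jellyfish.jpg")    # hypothetical structure source
style = load("sea_anemone.jpg")    # hypothetical texture source
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)

style_grams = [gram(f) for f in features(style)]
content_feats = features(content)

for step in range(300):
    opt.zero_grad()
    feats = features(target)
    c_loss = F.mse_loss(feats[-1], content_feats[-1])  # keep rough structure
    s_loss = sum(F.mse_loss(gram(f), g)                # match texture statistics
                 for f, g in zip(feats, style_grams))
    (c_loss + 1e4 * s_loss).backward()
    opt.step()
```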
So you could say that, in a way, what I'm doing is a contemporary version of something humans have been doing for a long time, even before cameras existed. In medieval times, people went on expeditions, and when they came back, they described what they had seen to an illustrator. The illustrator, having never seen what was being described, would draw based on the creatures they had previously seen, creating hybrid animals of a sort in the process. So an explorer might describe a beaver, but having never seen one, the illustrator might give it the head of a rodent, the body of a dog and a fish-like tail. In the series “Artificial Natural History”, I took thousands of illustrations from a natural history archive and fed them to a neural network to generate new versions of them.

But up until then, all my work had been done in 2D. With the help of my studio partner, Feileacan McCormick, we decided to train a neural network on a data set of 3D-scanned beetles. I must warn you that our first results were extremely blurry; they looked like the blobs you see here. This could be for many reasons, one of them being that there aren't really a lot of openly available data sets of 3D insects. We were also repurposing a neural network that is normally used to generate images to generate 3D shapes instead. So, believe it or not, these were very exciting blobs to us.
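One common way to repurpose an image-oriented network for 3D, and a plausible reading of the step described above, is to turn each scanned mesh into a fixed-size voxel occupancy grid. The sketch below uses the trimesh library; the file path and resolution are placeholders, not the studio's actual pipeline.

```python
# Voxelizing 3D scans into fixed-size occupancy grids that an
# image-style generative network can consume. Paths and resolution are
# hypothetical; this is a sketch of the general approach.
import numpy as np
import trimesh

def mesh_to_voxels(path, resolution=32):
    mesh = trimesh.load(path, force="mesh")
    # center the mesh and scale it into a unit cube so every scan shares a scale
    mesh.apply_translation(-mesh.bounds.mean(axis=0))
    mesh.apply_scale(1.0 / max(mesh.extents))
    vox = mesh.voxelized(pitch=1.0 / resolution)
    grid = vox.matrix.astype(np.float32)  # boolean occupancy -> float
    # pad or crop to a uniform shape so grids can be batched
    out = np.zeros((resolution,) * 3, dtype=np.float32)
    s = [min(a, resolution) for a in grid.shape]
    out[:s[0], :s[1], :s[2]] = grid[:s[0], :s[1], :s[2]]
    return out

# "beetle_001.obj" is a placeholder for one of the 3D-scanned beetles
dataset = np.stack([mesh_to_voxels("beetle_001.obj")])
```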
But with time and some very hacky solutions, like data augmentation, where we threw in ants and other beetle-like insects to enhance the data set, we ended up getting this, which we've been told looks like grilled chicken.
(Laughter)
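As a hedged illustration of that augmentation step, here is one simple way to enlarge a small 3D data set: mix in related insects and apply random rotations and mirror flips to every grid. The stand-in data here is random; real grids would come from scans like the ones voxelized above.

```python
# Toy data augmentation for small voxel data sets: random 90-degree
# rotations and mirror flips, applied to beetles plus beetle-like
# insects. The grids here are random stand-ins for real scans.
import numpy as np

rng = np.random.default_rng(0)

def augment(grid):
    grid = np.rot90(grid, k=rng.integers(4), axes=(0, 2))  # spin around the vertical axis
    if rng.random() < 0.5:
        grid = np.flip(grid, axis=0)  # insects are roughly bilaterally symmetric
    return grid.copy()

beetles = [(rng.random((32, 32, 32)) > 0.9).astype(np.float32) for _ in range(8)]
ants = [(rng.random((32, 32, 32)) > 0.9).astype(np.float32) for _ in range(8)]
dataset = [augment(g) for g in beetles + ants for _ in range(4)]  # 4 variants each
```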
But hungry for more, we pushed our technique, and eventually the results looked like this. We used something called 3D style transfer to map textures onto them, and we also trained a natural-language model to generate scientific-sounding names and anatomical descriptions. And eventually we even found a network architecture that could handle 3D meshes, so they ended up looking like this. And for us, this became a way of creating kind of a speculative study --
(Applause)
A speculative study of creatures that never existed, a kind of speculative biology.
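The name generation mentioned above used a trained language model; purely as a toy illustration of the idea, here is a tiny character-level Markov chain fit on a few real beetle names. The training names and model order are arbitrary choices.

```python
# A toy stand-in for the talk's name-generating language model: a
# character-level Markov chain over a handful of real beetle binomials.
import random
from collections import defaultdict

names = ["Goliathus goliatus", "Dynastes hercules", "Lucanus cervus",
         "Chalcosoma atlas", "Titanus giganteus"]

order = 3
table = defaultdict(list)
for name in names:
    padded = "^" * order + name.lower() + "$"  # ^ marks start, $ marks end
    for i in range(len(padded) - order):
        table[padded[i:i + order]].append(padded[i + order])

def generate(rng=random.Random(42)):
    state, out = "^" * order, []
    while True:
        ch = rng.choice(table[state])
        if ch == "$":
            return "".join(out).capitalize()
        out.append(ch)
        state = state[1:] + ch

print(generate())  # prints an invented, scientific-sounding name
```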
But I didn't want to talk about AI and its potential unless it brought me closer to a real species. Which of these do you think is easier to find data about online?
(Laughter)
Yeah, well, as you guessed correctly, the red panda. This could be for many reasons, one of them being how cute they are, which means we photograph and talk about them a lot, unlike the boreal felt lichen. But both of them are classified as endangered. So I wanted to bring visibility to other endangered species that don't get the same amount of digital representation as a cute, fluffy red panda. And to do this, we trained an AI on millions of images of the natural world and then prompted it with text to generate some of these creatures.
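The model from the talk isn't public; as an illustration of what prompting a text-to-image model with a species description looks like in code, here is the standard Hugging Face diffusers API. The checkpoint and output path are assumptions, not the custom model the talk describes.

```python
# Prompting an off-the-shelf text-to-image diffusion model with a
# species description. The checkpoint is an assumed stand-in, not the
# custom model trained for the talk.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("an image of a critically endangered spider, "
          "the peacock tarantula, Poecilotheria metallica")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("peacock_tarantula_generated.png")
```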
So when prompted with the text "an image of a critically endangered spider, the peacock tarantula" and its scientific name, our model generated this. And here's an image of the real peacock tarantula, a wonderful spider endemic to India. But when prompted with the text "an image of a critically endangered bird, the mangrove finch," our model generated this. And here's a photo of the real mangrove finch. Both of these creatures exist in the wild, but the accuracy of each generated image depends entirely on the data available. To me, these chimeras of our everyday data are a different way of seeing how the future could be. Not in a literal sense, perhaps, but in the sense that by practicing expanding our own imagination about the ecosystems we are a part of, we might just be better equipped to recognize new opportunities and potential.
Knowing that there's a boundary to our imagination doesn't have to feel limiting. On the contrary, it can motivate us to push that boundary further, to seek out colors and things we haven't yet seen, and perhaps enrich our imagination as a result.
So thank you.
(Applause)