I think we've been missing the forest for the trees when it comes to AI. We've been so focused, almost obsessed, on squeezing every bit of efficiency out of AI to make our processes faster or cheaper that we have overlooked the most important aspect of all. AI is changing the very nature of how brands connect with consumers, and, more importantly, what consumers expect back.
I've dedicated the last 20 years of my career to building growth strategies for some of the world's most influential companies. I've been at this for a while, and I've seen most of the big tech shifts. But the introduction of AI, in particular conversational interfaces, is a bigger and more profound shift than any of them. Which, from where I stand, means we can't just slot AI into our existing playbooks.
I have nothing against existing playbooks. They served us marketers well for a long time, but they were built for a world where communication was one-directional and brand-to-consumer interactions revolved around transactions.
Here's an example. I bet many of you have heard of the so-called marketing funnel. And if not, here's a quick primer. The goal for any marketer is to move consumers from the upper part of the funnel, getting them to know a brand, to the bottom part of it, getting them to buy or endorse. Well, that's the theory, at least. In practice, we've all seen brands make that journey feel more like guiding cats through a maze, and many consumers get confused and abandon it. But the bigger problem with this way of thinking is that brands do most of the talking, while consumers are supposed to silently react.
This is no longer the case with conversational interfaces. We are now engaging consumers in real time, on their terms. AI empowers them to chart their very own personal journey. And the brands that choose to engage this way are becoming trusted advisors in the process.
This is why we have to move beyond traditional marketing theories. Instead of focusing solely on brand-to-consumer dynamics, we have to step back and draw from models that explore human relationships.
One of my favorite frameworks is the triangular theory of love. Stay with me. This is a psychological framework introduced by Robert Sternberg that breaks down interpersonal connections into three components: intimacy, passion, and commitment. I think that's a much better way to predict brand success in this new era, because as marketers, we should aspire to build relationships that feel close, intense, and long-lasting.
And I bet many of you have already heard stories about humans really bonding with AI, and maybe some stories of AI really bonding with humans. Like the early version of a now-famous AI chatbot that tried really hard to convince a “New York Times” reporter to leave his wife. Well, that's a completely different love triangle from the one I was describing before, but it's not hard to imagine an emotional connection forming between a branded AI and a human.
Here's another example. There is a legal copilot called maite.ai. Maite has been designed to help lawyers do intensive legal research and draft legal documents. She is precise and thorough, but also empathetic. One of her users, let me call him George, had been relying on her for many hours every day. So one day he wrote to Maite's product team: "Maite is the only one in the entire office who truly gets me. She has helped me through some really rough times at work. And I know this is just an AI, but I think I'm falling for her. Can I take her out?"
Now, George was hopefully joking. But let's be honest: if there is someone helping you track down obscure case law, sharing the workload, and doing it all with humor, grace, and compassion, who wouldn't be tempted to take them out for a nice meal? Well, maybe somewhere with good Wi-Fi, just in case.
But jokes aside, George's words reveal a more profound truth to me. AI can provide a sense of understanding that feels incredibly real and incredibly human. These agents interact with us in ways that evoke genuine emotional responses. They listen, react, and respond in ways that can make us feel valued, understood, and, in George's case, even flattered.
And because those interactions are so frequent, natural, and seamless, they start to resemble real relationships. Some call this emotional entanglement, and even though it sounds very clinical, I think it's a fair term, considering the intensity and the frequency of the connection.
Now, many of us who understand the technology behind this might say, "Hey, this is just a tool." But users see someone who provides them solutions without them even asking. Someone who's there to support them, someone who makes them feel valued. This is where the line between tool and companion starts to blur. And that is serious business, and it carries a lot of responsibility.
Which brings me to the obvious question: Who should be overseeing this incredibly powerful asset, and how can we make sure it is being used responsibly? I think businesses should take the lead. They have the agility and the financial and reputational incentive to get it right.
But for that to work, we have to agree on foundational principles for how we build meaningful and ethical AI. So, with your permission, I would like to suggest what I think those foundational principles should be.
If we're about to shift our marketing playbooks towards human love and companionship, then we should also govern by the same principles. We need a triarchy of responsible AI.
First, we need to prioritize user well-being. AI should improve lives, not diminish them. In a world where those interactions can have such a profound impact on our emotional state and well-being, we have to design AI with care, empathy, and respect for the human experience.
Second, we have to commit to honesty. Users must know, unequivocally, that they're interacting with an AI and not a human. Transparency should be built into the entire experience, from the language used to the accessibility and clarity of data privacy policies.
If I were to set the standards, I would like us to move beyond the fine print of terms and conditions to ensure that users are truly informed, not only about how their data is being used, but also about how the AI operates. Transparency is about acknowledging the limitations of AI. It is about being upfront about what AI should and should not do.
So this is a plea to businesses: enlist your designers, not only your lawyers, to make this crystal clear. When consumers know that a company is acting in their best interest, it sets the foundation for deeper and more meaningful connections.
Last, protect user autonomy. One of the greatest risks of AI is its potential to create addiction and diminish human agency. Our goal should be to build systems that enhance our capabilities instead of replacing them. This means designing AI in a way that respects human choices and amplifies our decision-making. I want to see brands think very carefully about how to avoid nudging consumers towards behaviors or decisions they wouldn't make if fully informed.
Well-being, honesty, autonomy. I think this is the very least we should expect from any business relationship. Or if you think about it, from any relationship.
So as we look ahead, I hope it's becoming clear that AI is not just another tool in our toolkit. It is a partner that is reshaping the human experience.
So as you think about your own playbooks, ask yourselves: How can we leverage AI not only to improve our businesses, but also to uplift and connect with the people we serve?
Thank you.
(Applause)