Kelly Corrigan: So will you describe for us the average Woebot interaction?
Alison Darcy: Sure. So, well, first of all, I suppose it's important to say that we built Woebot to meet an unmet need. In 2017, depression was already the leading cause of disability worldwide. And I'm team human, too. I really believe in what human therapists do. But, you know, it doesn't matter how good a therapist is. You could be the best therapist in the world, but unless you're with your patient at 2am when they're having a panic attack, you can't help them in that moment. And therapy doesn't happen in a vacuum. We all have real lives. I was a clinical research psychologist, developing some of the world's most sophisticated psychotherapeutic treatments. But I was always haunted by this idea that it doesn't really matter how sophisticated the treatments we make are if people can't access them. So access has to be part of the design. And approachability has to be part of the design. Because what do you do in that 2am moment, when you can't think straight and you can't remember the thing your therapist told you you should do? That, for me, is why we built Woebot: to meet people where they're at in those moments when it's actually hardest to reach out to another person.
KC: And how long do they stay on with you?
AD: So they're brief, very brief encounters. Six and a half minutes is the average length of time. And about 75 to 80 percent of all of those conversations are happening outside of clinic hours. The longest conversations people have are between 2 and 5 am.
KC: Yeah. And is Woebot good for roleplay?
AD: Actually, we have found that generative AI -- So Woebot was built to be rules-based. Everything Woebot says has been scripted by our writing team and under the supervision of clinical psychologists. And so it's very safe, it's on the rails. Woebot will never make up something new.
KC: Fantastic.
AD: But we have been, you know, exploring the generative models. And it turns out generative AI is really good for roleplays. And it kind of speaks to some of the advantages that AIs have -- they're really good at doing the stuff that humans aren't so great at. And I think roleplays are one of those things for sure.
KC: Do people disclose more quickly with an AI than they would with a person?
AD: Yeah. That was shown in an early study, around 2015, I believe: people would rather disclose to an AI than when they believe there's a human behind it. And that's particularly pronounced for things that are perceived as very stigmatized. So, yeah, there's a sort of advantage to being an AI in that it's never judging you. You don't have to think about how you appear to the AI.
KC: When I think about it, at least four concerns come to mind. One is price, always. One is privacy, always -- like, who gets these transcripts? One is control -- like, who defines how an AI responds, and what theories and theses of change is it working from? But the one that scares the hell out of me is the perfection problem. And sometimes I wonder if we might inadvertently be creating the conditions for a total rejection of humanity -- of, like, dumb, boring, incomplete, half-asleep humans -- when you could have this thing that is so hyper-responsive. Do you feel like people, once they find Woebot, never want to leave it?
AD: Definitely not. But that's by design, right? It all depends on what you're building the thing for. And if you're building it for human well-being, for human advancement, the objective is similar to a parent's: success looks like individuation and independence and growth. And that's partly, you know, challenging the idea of perfection, just like you did. A great AI should be helping you see that perfection is just an illusion, particularly when it comes to humans. That's what makes us human, and that's something to be celebrated. But of course, to your point, it really depends on who the designer is and what this AI is being built for. That is going to be, and is, such a crucial question.
KC: Which goes to business model.
AD: Sure.
KC: So who pays for Woebot?
AD: Well, currently Woebot is distributed in partnership with health systems. But that's right: we build, again, for these short encounters -- let people talk, invite them to use a skill inspired by one of these great therapeutic approaches, like cognitive behavioral therapy, and then get them back to their life as soon as possible. We never build for engagement, for keeping people in the conversation as long as possible, which we think is a road to addiction, right? And that's all about the incentive -- how you're being paid. As entrepreneurs, we all have a responsibility to ensure that the AIs are in service of humans, not the other way around.
KC: Yeah. I'm thinking about Lenore Skenazy, who I now have this huge crush on, I bet we all do.
(Cheers and applause)
And she said that the best sentence you could ever hear from a kid is, "I did it myself." And I wonder, if we create this dependence on AI therapy companions, whether you'll never be able to say, "I did it myself." None of us will.
AD: Well, I still think the humans are doing it themselves, right? Because that's the beauty of a great therapeutic process, if you like -- well, this isn't really therapy, right? Structurally, it is so different. But a great process is just asking the person the right questions. They're the ones that have to do all of the work. They're the ones that have to shift their mindset, or acknowledge their role in a conflict with somebody, or tune in to their deepest, darkest negative thinking. And that stuff is hard. And that is all on the person. The AI is just going to ask you the right questions to get there. So this isn't giving advice or giving a diagnosis. It's very much, and should be, about helping people develop their own resources. I use the analogy of those mechanical machines that shoot tennis balls at people so they can practice their swing and get better at the game with a human. These are, fundamentally, tools. I believe that, and I think they should be built like that -- and we should make sure that the objective function, if you like, is human betterment.
KC: Is there anything you do explicitly to push people back into IRL interactions?
AD: Oh, right. Well, yeah, exactly. That would be part of Woebot's kind of value set. You know, we'll constantly talk somebody through, "Hey, you know, what is the point of avoidance?" And if it is a discomfort with other people, then Woebot will sort of encourage that person to follow through with speaking to another human. And then we'll come back a few days later and say, "Hey, you said you were going to talk to Lucy. Have you done it?" And we find in our data that, aside from the daily check-ins, which facilitate emotional self-awareness, accountability is the feature people favor most in this technology. So you want that kind of accountability.
KC: Do you have red lines? Has Woebot sat around and said, there's a whole set of things that people might do in this space that we are not going to do?
AD: Yeah, absolutely, loads. Like, give advice, diagnose, give away data, sell data, especially to advertisers. Flirt, right? Because that muddies the dynamic of what is happening here. You know, again, it has to be so clear what is the purpose of this conversation and what are we trying to achieve. And staying within that boundary is really important.
KC: Is the effectiveness of therapy getting better over time? Or is this sort of element in the mix maybe going to increase the efficacy across the board?
AD: See, this is the question. I think we haven't done a great job of innovating in psychotherapy. Forgive me -- some of my best friends are clinical psychologists -- but we're not doing a great job. Since founding the company, things are much, much worse now. And it's interesting: despite all of this incredible innovation and technological advancement, we haven't moved the needle at all. We are still as anxious and depressed as ever. In fact, a recent World Health Organization survey found that 20 percent of high schoolers have seriously considered suicide. So this is getting so much worse. Something needs to change. And I think we need to expand the aperture and bring in additional tools. It's never about replacing the great human therapists that we have. But most people aren't getting in front of a therapist. And even if they are, the therapist isn't there beside them as they live their life.
KC: Yeah. Could you imagine a point where you could put an AI on the kitchen table, and then the family could have one of its sort of little fights, shall we say? And then it would take the transcript and say, "Edward, you shouldn't have said this." And, "Kelly, you interrupted," and ... Like, could you imagine that kind of feedback on the dynamics that are keeping a family cycling through the same dumb patterns over and over? Not being personal at all here, Edward.
(Laughter)
AD: As you were saying that, I was imagining my own family. I'm the youngest of six, and I'm just thinking that laptop would be flying through the window so fast.
(Laughter)
Yeah, like, again, this is a toolset, I believe. And, you know, we can build tools, and we can use the tools in certain ways. But I think you're bringing up something else that's interesting: it's not about replicating the models of therapeutic approaches that were built for human delivery. It's about leaning into what the AIs can bring to the table that's new and novel and specific to that technology, that toolset. And that's really, I think, the opportunity moving forward with these more advanced tools.
KC: I'm thinking about your comment about flirting, and my best friend is pretty sure that her therapist falls asleep on her. But the therapist has bangs, and so, my friend says, she just kind of is like this, and you can't tell if she's nodding off or just really thinking.
(Laughter)
And, you know, obviously, all therapists vary, parents vary. Do you have a thought about which has more potential for damage, an AI or a human?
AD: That is a big question. I think the AIs have plenty of potential for damage, as do humans. And it's very early days with the technology. The thing is that we have the opportunity to develop AI with intentionality. Of course, there will be unintended consequences, and we need to build in structures to monitor and watch for those, and to take advantage of positive directions. So we'll see. Fundamentally, these are just tools. And also, humanity is humanity for a reason. There are things that are common to us. Pain is something, as we heard yesterday, that is common to all of us. We will all go through difficult moments. We will all experience grief. We will all lose a loved one. And it's about understanding how we're going to work together.
But yeah, just to reiterate that point, we have to make sure that the tech is in service of humans, not the other way around.
KC: Thank you so much for coming to TED.
(Applause)