Just a year ago, Chukurah Ali had fulfilled a dream of owning her own bakery — Coco’s Desserts in St. Louis, Mo. — which specialized in the sort of custom-made ornate wedding cakes often featured in baking show competitions. Ali, a single mom, supported her daughter and mother by baking recipes she learned from her beloved grandmother.
But last February, all that fell apart, after a car accident left Ali hobbled by injuries from head to knee. “I could barely talk, I could barely move,” she says, sobbing. “I felt like I was worthless because I could barely provide for my family.”
As darkness and depression engulfed Ali, help seemed out of reach; she couldn’t find an available therapist, had no car to get to appointments, and couldn’t afford to pay for one. After having to shut down her bakery, she had no health insurance.
So her orthopedist suggested a mental-health app called Wysa. Its chatbot-only service is free, though it also offers teletherapy services with a human for a fee ranging from $15 to $30 a week; that fee is sometimes covered by insurance. The chatbot, which Wysa co-founder Ramakant Vempati describes as a “friendly” and “empathetic” tool, asks questions like, “How are you feeling?” or “What’s bothering you?” The computer then analyzes the words and phrases in the answers to deliver supportive messages, or advice about managing chronic pain, for example, or grief — all served up from a database of responses that have been prewritten by a psychologist trained in cognitive behavioral therapy.
That is how Ali found herself on a new frontier of technology and mental health. Advances in artificial intelligence — such as ChatGPT — are increasingly being looked to as a way to help screen for, or support, people dealing with isolation, or mild depression or anxiety. Human emotions are tracked, analyzed and responded to, using machine learning that tries to monitor a patient’s mood, or mimic a human therapist’s interactions with a patient. It’s an area garnering lots of interest, in part because of its potential to overcome the common kinds of financial and logistical barriers to care, such as those Ali faced.
Potential pitfalls and risks of chatbot therapy
There is, of course, still plenty of debate and skepticism about the capacity of machines to read or respond accurately to the whole spectrum of human emotion — and the potential pitfalls of when the approach fails. (Controversy flared up on social media recently over a canceled experiment involving chatbot-assisted therapeutic messages.)
“The hype and promise is way ahead of the research that shows its effectiveness,” says Serife Tekin, a philosophy professor and researcher in mental health ethics at the University of Texas San Antonio. Algorithms are still not at a point where they can mimic the complexities of human emotion, let alone emulate empathetic care, she says.
Tekin says there’s a risk that teenagers, for example, might attempt AI-driven therapy, find it lacking, then refuse the real thing with a human being. “My worry is they will turn away from other mental health interventions saying, ‘Oh well, I already tried this and it didn’t work,’ ” she says.
But proponents of chatbot therapy say the approach may also be the only realistic and affordable way to address a gaping worldwide need for more mental health care, at a time when there are simply not enough professionals to help all the people who could benefit.
Someone dealing with stress in a family relationship, for example, might benefit from a reminder to meditate. Or apps that encourage forms of journaling might boost a user’s confidence by pointing out where they make progress.
Proponents call the chatbot a ‘guided self-help ally’
It’s best thought of as a “guided self-help ally,” says Athena Robinson, chief clinical officer for Woebot Health, an AI-driven chatbot service. “Woebot listens to the user’s inputs in the moment through text-based messaging to understand if they want to work on a particular problem,” Robinson says, then offers a variety of tools to choose from, based on methods scientifically proven to be effective.
Many people will not embrace opening up to a robot.
Chukurah Ali says it felt silly to her too, initially. “I’m like, ‘OK, I’m talking to a bot, it’s not gonna do nothing; I want to talk to a therapist,’ ” Ali says, then adds, as if she still cannot believe it herself: “But that bot helped!”
At a practical level, she says, the chatbot was extremely easy and accessible. Confined to her bed, she could text it at 3 a.m.
“How are you feeling today?” the chatbot would ask.
“I’m not feeling it,” Ali says she sometimes would respond.
The chatbot would then suggest things that might soothe her, or take her mind off the pain — like deep breathing, listening to calming music, or trying a simple exercise she could do in bed. Ali says things the chatbot said reminded her of the in-person therapy she did years earlier. “It’s not a person, but, it makes you feel like it’s a person,” she says, “because it’s asking you all the right questions.”
Technology has gotten good at identifying and labeling emotions fairly accurately, based on motion and facial expressions, a person’s online activity, phrasing and vocal tone, says Rosalind Picard, director of MIT’s Affective Computing Research Group. “We know we can elicit the feeling that the AI cares for you,” she says. But, because all AI systems actually do is respond based on a series of inputs, people interacting with the systems often find that longer conversations ultimately feel empty, sterile and superficial.
While AI may not fully simulate one-on-one individual counseling, its proponents say there are plenty of other existing and future ways it could support or improve human counseling.
AI might improve mental health services in other ways
“What I’m talking about in terms of the future of AI is not just helping doctors and [health] systems to get better, but helping to do more prevention on the front end,” Picard says, by reading early signals of stress, for example, then offering suggestions to bolster a person’s resilience. Picard, for example, is looking at various ways technology might flag a patient’s worsening mood — using data collected from motion sensors on the body, activity on apps, or posts on social media.
Technology might also help improve the efficacy of treatment by notifying therapists when patients skip medications, or by keeping detailed notes about a patient’s tone or behavior during sessions.
Maybe the most controversial applications of AI in the therapy realm are the chatbots that interact directly with patients like Chukurah Ali.
What’s the risk?
Chatbots may not appeal to everyone, or could be misused or mistaken. Skeptics point to instances where computers misunderstood users, and generated potentially damaging messages.
But research also shows some people interacting with these chatbots actually prefer the machines; they feel less stigma in asking for help, knowing there’s no human at the other end.
Ali says that as odd as it might sound to some people, after nearly a year, she still relies on her chatbot.
“I think the most I talked to that bot was like seven times a day,” she says, laughing. She says that rather than replacing her human health care providers, the chatbot has helped lift her spirits enough so she keeps those appointments. Because of the steady coaching by her chatbot, she says, she’s more likely to get up and go to a physical therapy appointment, instead of canceling it because she feels blue.
That’s precisely why Ali’s doctor, Washington University orthopedist Abby Cheng, suggested she use the app. Cheng treats physical ailments, but says the mental health challenges that accompany those problems almost always hold people back in recovery. Addressing the mental health challenge, in turn, is complicated because patients often run into a lack of therapists, transportation, insurance, time or money, says Cheng, who is conducting her own studies based on patients’ use of the Wysa app.
“In order to address this huge mental health crisis we have in our nation — and even globally — I think digital treatments and AI can play a role in that, and at least fill some of that gap in the shortage of providers and resources that people have,” Cheng says.
Not meant for crisis intervention
But getting to such a future will require navigating thorny issues like the need for regulation, protecting patient privacy and issues of legal liability. Who bears responsibility if the technology goes wrong?
Many similar apps on the market, including those from Woebot or Pyx Health, repeatedly warn users that they are not designed to intervene in acute crisis situations. And even AI’s proponents argue computers aren’t ready, and may never be ready, to replace human therapists — especially for handling people in crisis.
“We have not reached a point where, in an affordable, scalable way, AI can understand every sort of response that a human might give, particularly those in crisis,” says Cindy Jordan, CEO of Pyx Health, which has an app designed to communicate with people who feel chronically lonely.
Jordan says Pyx’s goal is to broaden access to care — the service is now offered in 62 U.S. markets and is paid for by Medicaid and Medicare. But she also balances that against worries that the chatbot might respond to a suicidal person with, “Oh, I’m sorry to hear that.” Or worse, “I don’t understand you.” That makes her nervous, she says, so as a backup, Pyx staffs a call center with people who call users when the system flags them as potentially in crisis.
Woebot, a text-based mental health service, warns users up front about the limitations of its service, and warns that it should not be used for crisis intervention or management. If a user’s text indicates a severe problem, the service will refer patients to other therapeutic or emergency resources.
Cross-cultural research on effectiveness of chatbot therapy is still sparse
Woebot’s Athena Robinson says such disclosures are critical. “It is imperative that what’s available to the public is clinically and rigorously tested,” she says. Data on Woebot’s use has been published in peer-reviewed scientific journals, and some of its applications, including for postpartum depression and substance use disorder, are part of ongoing clinical research studies.
But in the U.S. and elsewhere, there is no clear regulatory approval process for such services before they go to market. (Last year Wysa did receive a designation that allows it to work with the Food and Drug Administration on the further development of its product.)
It’s important that clinical studies — especially those that cut across different countries and ethnicities — continue to be done to hone the technology’s intelligence and its ability to read different cultures and personalities, says Aniket Bera, an associate professor of computer science at Purdue.
“Mental-health related problems are heavily individualized problems,” Bera says, yet the available data on chatbot therapy is heavily weighted toward white males. That bias, he says, makes the technology more likely to misunderstand cultural cues from people like him, who grew up in India, for example.
“I don’t know if it will ever be equal to an empathetic human,” Bera says, but “I guess that part of my life’s journey is to come close.”
And, in the meantime, for people like Chukurah Ali, the technology is already a welcome stand-in. She says she has recommended the Wysa app to many of her friends. She says she also finds herself passing along advice she’s picked up from the app, asking friends, “Oh, what you gonna do today to make you feel better? How about you try this today?”
It isn’t just the technology that is trying to act human, she says, and laughs. She’s now begun mimicking the technology.