Ahead of another patient visit, Maria recalled, “I just felt that something really bad was going to happen.” She texted Woebot, which explained the concept of catastrophic thinking. It can be useful to prepare for the worst, Woebot said—but that preparation can go too far. “It helped me name this thing that I do all the time,” Maria said. She found Woebot so beneficial that she started seeing a human therapist.

Woebot is one of several successful phone-based chatbots, some aimed specifically at mental health, others designed to provide entertainment, comfort, or sympathetic conversation. Today, millions of people talk to programs and apps such as Happify, which encourages users to “break old patterns,” and Replika, an “A.I. companion” that is “always on your side,” serving as a friend, a mentor, or even a romantic partner. The worlds of psychiatry, therapy, computer science, and consumer technology are converging: increasingly, we soothe ourselves with our devices, while programmers, psychiatrists, and startup founders design A.I. systems that analyze medical records and therapy sessions in hopes of diagnosing, treating, and even predicting mental illness. In 2021, digital startups that focussed on mental health secured more than five billion dollars in venture capital—more than double that for any other medical issue.

The scale of investment reflects the size of the problem. Roughly one in five American adults has a mental illness. An estimated one in twenty has what’s considered a serious mental illness—major depression, bipolar disorder, schizophrenia—that profoundly impairs the ability to live, work, or relate to others. Decades-old drugs such as Prozac and Xanax, once billed as revolutionary antidotes to depression and anxiety, have proved less effective than many had hoped; care remains fragmented, belated, and inadequate; and the over-all burden of mental illness in the U.S., as measured by years lost to disability, seems to have increased. Suicide rates have fallen around the world since the nineteen-nineties, but in America they’ve risen by about a third. Mental-health care is “a shitstorm,” Thomas Insel, a former director of the National Institute of Mental Health, told me. “Nobody likes what they get. Nobody is happy with what they give. It’s a complete mess.” Since leaving the N.I.M.H., in 2015, Insel has worked at a string of digital-mental-health companies.

The treatment of mental illness requires imagination, insight, and empathy—traits that A.I. can only pretend to have. And yet, Eliza, which Weizenbaum named after Eliza Doolittle, the fake-it-till-you-make-it heroine of George Bernard Shaw’s “Pygmalion,” created a therapeutic illusion despite having “no memory” and “no processing power,” Christian writes. What might a system like OpenAI’s ChatGPT, which has been trained on vast swaths of the writing on the Internet, conjure? An algorithm that analyzes patient records has no interior understanding of human beings—but it might still identify real psychiatric problems. Can artificial minds heal real ones? And what do we stand to gain, or lose, in letting them try?

John Pestian, a computer scientist who specializes in the analysis of medical data, first started using machine learning to study mental illness in the two-thousands, when he joined the faculty of Cincinnati Children’s Hospital Medical Center. In graduate school, he had built statistical models to improve care for patients undergoing cardiac bypass surgery. At Cincinnati Children’s, which operates the largest pediatric psychiatric facility in the country, he was shocked by how many young people came in after trying to end their own lives. He wanted to know whether computers could figure out who was at risk of self-harm.

Pestian contacted Edwin Shneidman, a clinical psychologist who’d founded the American Association of Suicidology. Shneidman gave him hundreds of suicide notes that families had shared with him, and Pestian expanded the collection into what he believes is the world’s largest. During one of our conversations, he showed me a note written by a young woman. On one side was an angry message to her boyfriend, and on the other she addressed her parents: “Daddy please hurry home. Mom I’m so tired. Please forgive me for everything.” Studying the suicide notes, Pestian noticed patterns. The most common statements were not expressions of guilt, sorrow, or anger, but instructions: make sure your brother repays the money I lent him; the car is almost out of gas; careful, there’s cyanide in the bathroom. He and his colleagues fed the notes into a language model—an A.I. system that learns which words and phrases tend to go together—and then tested its ability to recognize suicidal ideation in statements that people made. The results suggested that an algorithm could identify “the language of suicide.”
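The article doesn't specify how Pestian's language model was built, but the general shape of such a system can be sketched: a classifier that learns which words and short phrases tend to co-occur with labeled examples, then scores new statements. The sketch below is only an illustration of that idea, not his method; it uses scikit-learn, and the tiny example texts and labels are invented stand-ins, not material from the note collection.

```python
# Illustrative sketch only: a bag-of-n-grams text classifier of the general
# kind described above, not Pestian's actual model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder statements with labels
# (1 = suicidal ideation, 0 = not). Real work would use large,
# carefully consented clinical corpora.
texts = [
    "make sure your brother repays the money i lent him",
    "the car is almost out of gas",
    "i had a long day but dinner was nice",
    "we should plan a trip for the summer",
]
labels = [1, 1, 0, 0]

# Learn which words and short phrases tend to go with each label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams
    LogisticRegression(),
)
model.fit(texts, labels)

# Score a new statement; the output is a probability, not a diagnosis.
print(model.predict_proba(["please forgive me for everything"])[0][1])
```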

Next, Pestian turned to audio recordings taken from patient visits to the hospital’s E.R. With his colleagues, he developed software to analyze not just the words people spoke but the sounds of their speech. The team found that people experiencing suicidal thoughts sighed more and laughed less than others. When speaking, they tended to pause longer and to shorten their vowels, making words less intelligible; their voices sounded breathier, and they expressed more anger and less hope. In the largest trial of its kind, Pestian’s team enrolled hundreds of patients, recorded their speech, and used algorithms to classify them as suicidal, mentally ill but not suicidal, or neither. About eighty-five per cent of the time, his A.I. model came to the same conclusions as human caregivers—making it potentially useful for inexperienced, overbooked, or uncertain clinicians.
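Pestian's actual audio pipeline isn't detailed here, but one plausible shape for it is sketched below: pull crude prosodic features of the kind the paragraph mentions (how much of a recording is pause, rough spectral and energy summaries) and feed them to a generic three-way classifier. The file names, labels, and feature choices are all hypothetical, and the libraries (librosa, scikit-learn) are my own assumptions.

```python
# Illustrative sketch only: hand-rolled acoustic features fed to a generic
# three-way classifier. Not Pestian's pipeline; paths and labels are
# hypothetical placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

LABELS = ["suicidal", "mentally ill, not suicidal", "neither"]

def acoustic_features(path: str) -> np.ndarray:
    """Crude prosodic summary of one recording."""
    y, sr = librosa.load(path, sr=16000)
    # Non-silent stretches; the gaps between them approximate pauses.
    voiced = librosa.effects.split(y, top_db=30)
    total = len(y) / sr
    speech = sum((end - start) for start, end in voiced) / sr
    pause_ratio = 1.0 - speech / total if total else 0.0
    # Spectral and energy summaries stand in for breathiness and intensity cues.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([[pause_ratio], mfcc.mean(axis=1), [rms.mean()]])

# Hypothetical labeled recordings (indices into LABELS).
train_paths = ["visit_001.wav", "visit_002.wav", "visit_003.wav"]
train_labels = [0, 1, 2]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([acoustic_features(p) for p in train_paths], train_labels)
print(LABELS[clf.predict([acoustic_features("new_visit.wav")])[0]])
```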

A few years ago, Pestian and his colleagues used the algorithm to create an app, called SAM, which could be employed by school therapists. They tested it in some Cincinnati public schools. Ben Crotte, then a therapist treating middle and high schoolers, was among the first to try it. When asking students for their consent, “I was very straightforward,” Crotte told me. “I’d say, This application basically listens in on our conversation, records it, and compares what you say to what other people have said, to identify who’s at risk of hurting or killing themselves.”

One afternoon, Crotte met with a high-school freshman who was struggling with severe anxiety. During their conversation, she questioned whether she wanted to keep on living. If she was actively suicidal, then Crotte had an obligation to inform a supervisor, who might take further action, such as recommending that she be hospitalized. After talking more, he decided that she wasn’t in immediate danger—but the A.I. came to the opposite conclusion. “On the one hand, I thought, This thing really does work—if you’d just met her, you’d be pretty worried,” Crotte said. “But there were all these things I knew about her that the app didn’t know.” The girl had no history of hurting herself, no specific plans to do anything, and a supportive family. I asked Crotte what might have happened if he had been less familiar with the student, or less experienced. “It would definitely make me hesitant to just let her leave my office,” he told me. “I’d feel nervous about the liability of it. You have this thing telling you someone is high risk, and you’re just going to let them go?”

Algorithmic psychiatry involves many practical complexities. The Veterans Health Administration, a division of the Department of Veterans Affairs, may be the first large health-care provider to confront them. A few days before Thanksgiving, 2005, a twenty-two-year-old Army specialist named Joshua Omvig returned home to Iowa, after an eleven-month deployment in Iraq, showing signs of post-traumatic stress disorder; a month later, he died by suicide in his truck. In 2007, Congress passed the Joshua Omvig Veterans Suicide Prevention Act, the first federal legislation to address a long-standing epidemic of suicide among veterans. Its initiatives—a crisis hotline, a campaign to destigmatize mental illness, mandatory training for V.A. staff—were no match for the problem. Each year, thousands of veterans die by suicide—many times the number of soldiers who die in combat. A team that included John McCarthy, the V.A.’s director of data and surveillance for suicide prevention, gathered information about V.A. patients, using statistics to identify possible risk factors for suicide, such as chronic pain, homelessness, and depression. Their findings were shared with V.A. caregivers, but, between this data, the evolution of medical research, and the sheer quantity of patients’ records, “clinicians in care were getting just overloaded with signals,” McCarthy told me.
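The article doesn't describe the V.A. team's statistical method. One minimal way to surface risk factors from record flags is a regularized logistic regression whose coefficients give a rough ranking of which factors carry the most weight; the sketch below uses that approach, with invented column names and numbers rather than anything drawn from V.A. data.

```python
# Illustrative sketch only: a generic risk model over structured record
# flags like the factors named above. Columns and values are invented;
# this is not the V.A.'s actual model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical patient records: binary flags plus an outcome label.
records = pd.DataFrame(
    {
        "chronic_pain": [1, 0, 1, 0, 1, 1],
        "homelessness": [0, 0, 1, 0, 1, 0],
        "depression":   [1, 0, 1, 1, 1, 0],
        "outcome":      [1, 0, 1, 0, 1, 0],  # 1 = later suicide attempt
    }
)

X = records.drop(columns="outcome")
model = LogisticRegression().fit(X, records["outcome"])

# Coefficients give a rough ranking of which factors carry the most weight.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```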
