U.S. News: AI Can’t Replace Therapists – But It Can Help Them
For a young adult who is lonely or just needs someone to talk to, an artificial intelligence chatbot can feel like a nonjudgmental best friend, offering encouragement before an interview or consolation after a breakup. AI’s advice seems sincere, thoughtful and even empathic – in short, very human. But when a vulnerable person alludes to thoughts of suicide, AI is not the answer. Not by itself, at least.
Recent stories have documented the heartbreak of people dying by suicide after seeking help from chatbots rather than fellow humans. In this way, the ethos of the digital world – sometimes characterized as “move fast and break things” – clashes with the health practitioners’ oath to “first, do no harm.” When humans are being harmed, things must change.
As a researcher and licensed therapist with a background in computer science, I am interested in the intersection between technology and mental health, and I understand the technological foundations of AI. When I directed a counseling clinic, I sat with people in their most vulnerable moments. These experiences prompt me to consider the rise of therapy chatbots through both a technical and clinical lens.
AI, no matter how advanced, lacks the morality, responsibility and duty of care that humans carry. When someone has suicidal thoughts, they need human professionals to help. Therapists train for years before we are licensed, and we follow specific ethical protocols when a person reveals thoughts of suicide.