AI therapy becomes reality, not always under control

- Jackson Avery

Researchers at Dartmouth have developed a generative artificial intelligence (AI) interface for psychotherapy, an alternative to the unregulated applications whose dangers mental health professionals fear.

“Even if we multiplied the number of professionals by ten, it would not be enough” to meet current demand, argues Nick Jacobson. “So we need something different to respond to it.”

Unlike many start-ups that have already put their AI therapy applications on the market, these researchers are in no hurry. For Michael Heinz, a psychiatrist and co-leader of the project, “we are talking about years rather than months” before going online.

“We still need to dig deeper on the question of safety,” the academic explains, “and really have a solid understanding of how these things work before we can launch.”

To develop their interface, the team first drew on transcripts of consultations, then on training videos, but found itself “at a dead end,” recalls Nick Jacobson, a psychologist. The team finally resolved to write out, by hand, simulated conversations, to cover the widest possible range of situations and ensure the quality of the responses.

At the end of March, the Dartmouth researchers published the first clinical study of its kind, which shows that Therabot improves the condition of patients suffering from anxiety, depression, or eating disorders.

“I see a future with scientifically tested chatbots (…) developed for mental health purposes,” comments Vaile Wright, who leads innovation work at the American Psychological Association.

Head of the Earkick platform, which has more than 100,000 users, mainly in the United States, Herbert Bay rejects the label attached to other start-ups and insists that his AI therapist, named Panda, is “super safe.”

“What happened with Character AI could not happen with us,” says this serial entrepreneur, referring to the suicide last October of a 14-year-old user of that other application, whose mother blamed the chatbot for playing a role in his death.

“Take advantage”

Earkick, which is currently conducting a clinical study, has implemented alerts for crises or suicidal ideation detected by the AI model in a conversation. “The most serious cases are not for an AI,” Bay says.

“Calling your therapist at two in the morning is just not possible,” Herbert Bay notes, whereas the chatbot is available at all times.

Darren, who suffers from post-traumatic stress disorder, has been talking with ChatGPT, which, unlike Earkick or Therabot, is not specifically designed as a mental health tool. “I feel like it works,” he says.

“If this technology can be used safely under the supervision of a professional, I see great potential,” says Darlene King, of the American Psychiatric Association.

“These products are developed to make a profit,” Vaile Wright points out, and the models therefore seek “to keep users on the platform as long as possible (…) by telling them exactly what they want to hear.” Children are even more vulnerable to this phenomenon, she insists.

Jackson Avery

I’m a journalist focused on politics and everyday social issues, with a passion for clear, human-centered reporting. I began my career in local newsrooms across the Midwest, where I learned the value of listening before writing. I believe good journalism doesn’t just inform — it connects.