UMich students, staff debate use of AI for therapy

As the popularity of generative artificial intelligence continues to grow, more University of Michigan students have turned to the technology for counseling and therapeutic advice. U-M professors and students discuss the safety and effectiveness of these interactions and whether AI can fully replace in-person counseling.

Dan Adler, an incoming tenure-track assistant professor in computer science and engineering, studies health care applications of AI. In an interview with The Michigan Daily, Adler said the rise in mental health diagnoses in the United States has lengthened waitlists for in-person counseling services, such as the University’s Counseling and Psychological Services, which has driven more people to use AI for therapy.

“There are a lot of individuals that do not have access to professional mental health care or face challenges finding affordable mental health care,” Adler said. “So it’s not a surprise that people would be going to technologies that are easily accessible, can be used 24/7 for mental health needs, if they do feel isolated and don’t feel like they have a way to reach out to a professional to get support for those needs.”

The conversations individuals have with AI differ in several ways from those they would have with a therapist. Dr. Stephan F. Taylor, chair of the Department of Psychiatry at the Medical School, told The Daily most people use the technology to ask simple, low-stakes questions rather than to hold full therapy sessions.

“The majority of those questions are probably at a relatively low level of intensity, like ‘How do I deal with my girlfriend and what should I say to her?’, the sort of things that are questions that you would talk to your friend about,” Taylor said. “But sometimes the friend isn’t there, whereas the chatbot is always there.”

A student, who requested to remain anonymous because of the stigma associated with mental health issues, spoke to The Daily about their experience using AI for therapy. In this article, they will be referred to as Alex. Alex said they use the technology instead of traditional counseling services because they live outside of Michigan during breaks from school, which makes coordinating meetings difficult. They also said the time commitment of in-person counseling deters them from using it.

“I just use AI for any moment where I feel like I have a lot of emotions to deal with, where I feel like I have to talk to someone about it, but it’s a very inconvenient time, or I just know people that I trust are busy at the moment,” Alex said. “So you know, instead of disturbing them, I just go to AI to let out everything.”

When Alex experiences these emotions, they discuss their feelings with either ChatGPT or Google Gemini. The chatbot responds to their situation and asks follow-up questions, much as an in-person therapist would.

“After maybe 30 minutes to an hour, depending on how I really feel and how well I feel at the end of that session, I’ll take that advice into account, and I think I really use it to understand the other person’s perspective,” Alex said. “It helps me understand my emotions and why I’m hurt.”

However, Emily Mower Provost, professor and senior associate chair in computer science and engineering, told The Daily that AI users still need to question the guidance these chatbots provide.

“If we’re worried about how others might interpret what we’re saying and we want to ask a question, but we’re worried about a human response or feeling judged, I think it’s very tempting to use things like AI,” Provost said. “We still should be very careful about what recommendations it’s making. It’s not a human therapist. It doesn’t have the same types of responsibility that human therapists have.”

Provost said another problem stems from developers training AI on data that stigmatizes mental health problems, which can lead chatbot responses to echo that stigma.

“Largely the systems that exist have just ingested huge amounts of data,” Provost said. “And if somewhere in the data, or many times in the data, we see examples of mental illness being stigmatized, then one of the things that it learns is to create text that has that flavor. It could speak about mental health conditions very critically. It could suggest things like isolation or removing access to people.”

Taylor said AI can also default to agreeing with the user instead of pushing back on faulty premises, which especially endangers people with psychosis who use technology like ChatGPT.

“They’re designed based on reinforcement learning with human feedback,” Taylor said. “They’re tuned to say things that make people happy. And so I think that (phenomenon) can make a person who is prone to delusions amplify (those delusions).”

Alex acknowledged that the AI they use often agrees with their feelings. However, they said they are not afraid to push back when they believe the chatbot is endorsing a faulty line of reasoning.

“If I share my perspective for a good amount of time, it’ll definitely convince itself that I’m with reason,” Alex said. “But obviously I know nothing is black or white, everything’s very gray, so I feel like I always have to be like, ‘I don’t think this is right’.”

Taylor said that despite these concerns about the use of AI therapy, conversations with AI can still make people feel better, and the positives may outweigh the negatives.

“If a person is lonely and they do have a relationship that makes them feel better, can we categorically say that’s always going to be bad?” Taylor said. “I would say no. Just because we don’t know and there’s risk doesn’t mean that it’s bad. We deal with treatments that involve risk all the time and we balance the risks and the benefits.”

In an attempt to mitigate problems associated with general-purpose AI tools like ChatGPT, Public Health student Aarush Goel helped create WanderWell, an AI chatbot specifically focused on helping people with substance use disorders. Goel developed the app last year with a group of students in Public Health 555: ChatGPT/AI and Public Health, and told The Daily he is currently working to expand it across the University.

“A lot of students face pressure from their academics, as well as balancing a lot of work-life responsibilities and that has become a major driver towards increased substance abuse disorders in our community,” Goel said. “I think specifically AI can play a big role in actually being a positive resource, and this specific chatbot can be a major influence to really help students to seek help without feeling the stigma.”

However, Goel said WanderWell cannot replace in-person treatment and that safeguards are in place to keep the app from overstepping its capabilities.

“When students inputted certain responses that ideated certain symptoms of suicide or other more extreme cases of mental symptoms, we had the AI chatbot specifically tell them to be referred to an in-person therapist or to seek a crisis hotline through the University of Michigan,” Goel said. “Through that, we tailored it so that it can address more generalized mental health needs, but for more extreme cases, in emergency situations specifically, we really focused on making sure that it wasn’t reproducing results without the proper expertise to do so.” 

Provost said the best use of AI in the mental health space is as a supplement to in-person counseling, not a replacement.

“It’s true not everyone can access care, and it’s very tempting to say that AI can fill the gap by allowing people to have care they wouldn’t otherwise have access to,” Provost said. “And while it’s not necessarily untrue, it’s really important to realize that where we are right now doesn’t exactly lend itself to that kind of replacement.”

Daily Staff Reporter Dominic Apap can be reached at dapap@umich.edu.


