

The new frontier of AI and mental health


Written by Laurie Davies


Reviewed by Hinrich Eylers, PhD, PE, MBA, Vice Provost for Academic Operations and Doctoral Studies.

Meditating robot symbolizes the relationship between AI and mental health

When it comes to AI and mental health, rapidly evolving technologies are changing the game. It’s a brave new world out there. 

This doesn’t mean it’s a bad new world. It’s just that we’re still figuring it out.

Recognizing patterns. Making decisions. Learning from experience. Just like the people whose intelligence it simulates, AI isn’t all good or all bad. It’s agnostic in the algorithmic swirl that makes up its archetypes, chatbots and virtual interactions. Used in a balanced way, AI is showing promise as a tool that can affect our mental health in positive ways. 

For example, University of Phoenix’s Rodney Luster, PhD, LPC, is launching a study on AI and empathy — and the entire idea grew out of how an acquaintance of his used AI to help avert a really bad date. 

The acquaintance, following an argument with his girlfriend, had a litany of “emotional downloading” he wanted to do. (Read: He planned to give her a piece of his mind.) But first, he decided to use Character AI to set up a virtual role-playing scenario. 

For anyone in need of an AI tutorial — which is all of us at this point — “Character AI” allows users to interact with lifelike chatbot characters in ways that simulate real interactions. The more you build your character, the more other characters remember and “understand” you.

“Pouring into a simulated character what he planned to say to his girlfriend, he could see the hurt the other character experienced from the words he was using,” Luster says.


Rodney Luster, PhD, LPC
Senior Director of Research Strategy, Innovation and Development

That got Luster, the senior director of research strategy, innovation and development for the Research Center Enterprise in the College of Doctoral Studies, so intrigued by the possibilities for AI to influence empathy that he got to work on designing a research study to see what he could find out. His findings will add to an early and growing body of work examining AI and mental health. 

“AI is like anything that comes out … it’s shiny, it’s attractive. It will probably have some good uses to it,” Luster says. “Will it (also) have the opposite effect? Yes.”

We’ll start with the first one, the pros.

Is there a correlation between AI and mental health? The pros.

Trying to understand AI and mental health requires a different perspective. Imagine you’ve just come out of a deep freeze, and you’re trying to get up to speed on 21st century America. Well, speed becomes the key word. The alacrity with which artificial intelligence has moved into business, education, healthcare and almost every other area of life is incredible.

But is AI in our minds too? Can it be? Should it be?

Ruminations about the ethical use of AI aside, the tool does hold promise with regard to positive mental health benefits. Consider the following.

Access

Apps aimed at offering mental health support are proliferating. In this way, AI offers a deep advantage. It’s accessible during the same hours that mental healthcare is needed: 24/7.

“If people have access to a computer or a phone, then they can use some of these tools and that’s kind of like 24/7 support,” Luster says. “Of course, AI is never going to replace a therapist, because you’re talking about the human quotient in that. However, as a mental health resource, it becomes available on a 24-hour timeline, which is pretty cool.”

Assessment

AI is showing some promise in helping detect mental health problems. This might be as innocuous as AI detecting via your smartwatch that your physical activity has gone from high levels to zero, possibly indicating depression. 

Or it could simply mean you’re on an impossible work deadline. Either way, a chatbot prompt to see if you’re OK or to remind you to move isn’t a bad thing.
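As a rough illustration of that kind of check, here is a minimal Python sketch, assuming a wearable exposes daily step counts. The check_in_needed function, its window sizes and its drop threshold are hypothetical choices for illustration, not clinical guidance.

```python
from statistics import mean

def check_in_needed(daily_steps, recent_days=7, baseline_days=30, drop_ratio=0.3):
    """Flag a sustained drop in activity: the recent average falls far below the user's own baseline.

    daily_steps: list of step counts, oldest first. All thresholds are illustrative only.
    """
    if len(daily_steps) < baseline_days + recent_days:
        return False  # not enough history to compare against
    baseline = mean(daily_steps[-(baseline_days + recent_days):-recent_days])
    recent = mean(daily_steps[-recent_days:])
    return baseline > 0 and recent < drop_ratio * baseline

# Example: a month of active days followed by a week of near-zero movement.
history = [9000] * 30 + [400, 300, 500, 200, 350, 250, 100]
if check_in_needed(history):
    print("Hey, your activity has dropped a lot lately. How are you feeling today?")
```

The point of comparing a person against their own baseline, rather than a fixed number, is that a drop from high activity to almost none can matter even when the absolute count would be normal for someone else.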

A review of the literature on AI and mental health, published in Frontiers in Digital Health, reveals that through use of technologies such as wearables or smartphone applications that offer real-time emotion data, AI can provide accurate detection (perhaps in some cases even surpassing human detection). 

In other cases, AI can help people connect patterns that empower them to take charge of their health. For example, AI tracking tools that provide users with insights into their emotional patterns over time can help individuals understand their triggers. This, in turn, can help them develop healthier responses to those triggers.
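Here is a similarly hedged sketch of what trigger-spotting over a mood journal could look like, assuming each entry pairs a simple 1-to-5 mood rating with context tags. The sample data, the likely_triggers helper and its cutoffs are made up for illustration.

```python
from collections import Counter

# Hypothetical mood journal: each entry pairs a 1-5 mood rating with context tags.
entries = [
    {"mood": 2, "tags": ["work deadline", "poor sleep"]},
    {"mood": 4, "tags": ["exercise", "friends"]},
    {"mood": 1, "tags": ["work deadline", "skipped lunch"]},
    {"mood": 3, "tags": ["commute"]},
    {"mood": 2, "tags": ["poor sleep", "commute"]},
]

def likely_triggers(entries, low_mood=2, min_count=2):
    """Count which context tags keep showing up on low-mood days."""
    counts = Counter(tag for e in entries if e["mood"] <= low_mood for tag in e["tags"])
    return [tag for tag, n in counts.most_common() if n >= min_count]

print(likely_triggers(entries))  # -> ['work deadline', 'poor sleep']
```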

Another possible scenario could use algorithms to analyze MRI images, making early detection of difficult-to-diagnose diseases such as Alzheimer’s or Parkinson’s easier — thus making it possible to intervene earlier. 

Intervention

According to that same Frontiers in Digital Health study cited above, the digital triage capabilities of emerging AI-powered apps and programs may not stop at detecting mental health concerns. They might also be programmed to intervene.

For example, as AI analyzes a person’s answers related to mood, stress, energy and sleep patterns, a chatbot might recommend behavioral modifications such as physical activity, meditation and relaxation techniques. If the chatbot detects an immediate safety concern, it could promptly notify the patient’s healthcare provider.
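A minimal, rule-based sketch of that triage flow might look like the following, assuming self-reported check-in answers. The triage function, its keyword list and its cutoffs are illustrative assumptions, not how a production system would handle recommendations or safety escalation.

```python
SAFETY_KEYWORDS = {"hurt myself", "suicide", "end it all"}  # illustrative only

def triage(answers):
    """Very simplified rule-based triage over self-reported check-in answers.

    answers: dict with 'mood' (1-5), 'stress' (1-5), 'energy' (1-5),
    'sleep_hours', and optional 'notes' free text. Thresholds are illustrative,
    not clinical guidance.
    """
    notes = answers.get("notes", "").lower()
    if any(phrase in notes for phrase in SAFETY_KEYWORDS):
        return "Escalate: notify the care provider right away."

    suggestions = []
    if answers["stress"] >= 4:
        suggestions.append("try a 10-minute guided relaxation or breathing exercise")
    if answers["energy"] <= 2:
        suggestions.append("take a short walk or do some light physical activity")
    if answers["sleep_hours"] < 6:
        suggestions.append("wind down earlier tonight and limit screens before bed")
    if answers["mood"] <= 2:
        suggestions.append("consider a brief meditation or journaling session")

    return "Suggestions: " + "; ".join(suggestions) if suggestions else "Keep doing what you're doing."

print(triage({"mood": 2, "stress": 5, "energy": 2, "sleep_hours": 5, "notes": ""}))
```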

If all this sounds a little too Big Brother-ish, consider that AI is showing enormous potential with certain targeted populations.

For example, AI-powered therapeutic games and virtual reality experiences offer immersive environments that help those who experience emotional dysregulation practice their emotional regulation skills.

AI analysis of physiological and behavioral inputs from wearable technology and mobile devices is also showing promise for early detection of depression and bipolar disorder.

And in elderly populations, AI-enabled virtual companions can engage older adults in cognitive exercises and reminiscence therapy — addressing feelings of isolation and even cognitive decline.

AI and mental health: The cons

As with any new frontier, especially in today’s rush-to-market world, AI has its downsides. 

“Whether you’re in business or you’re someone who’s trying to work through issues, I think AI requires mindful, thoughtful implementation,” Luster says. “How are you using it? Are you being strategic while you’re guarding against the risk of things with it?”

Here are some of the potential downsides.

Less social interaction

An obvious drawback of using AI-generated applications and responses for something as deeply nuanced and internal as mental health is that they are no substitute for human interaction. AI will never offer the compassion and human contact that come from sitting with a person.

Put a different way: “AI won’t release oxytocin,” Luster says. “It doesn’t have that human element.”

Proponents of AI’s mental health applications might say, “Well, it could trigger some of that.” But AI will never be the same as sitting across from a real human who can interpret facial reactions, tone and body mannerisms and then respond with empathy, gentility, reason and, if needed, a challenge.

Social isolation

Played out to an even deeper conclusion, AI may lead some users into a world of social isolation, with chatbots for friends and screens offering their only view into the world. The U.S. Surgeon General recently released an advisory calling isolation an epidemic, with the mortality consequences of social disconnection being roughly equivalent to smoking 15 cigarettes a day.

Luster cautions that if people rely on chatbots, app-driven interactions and the virtual world rather than human connection, then one of our core human needs — the need to belong — will go unmet in the real world. “AI will only go so far,” he says.

Accuracy

Innovative solutions to mental healthcare delivery are needed in a system that is currently overloaded. However, the speedy release of AI-enabled models has some sounding the alarm. Fast doesn’t mean factual. Accuracy with mental health diagnosis and intervention is critical. 

In a stroke of irony, the very AI-enabled transcript from the videoconferencing program used for the interview with Luster issued this warning prominently over the digitally generated transcript: “AI generated content may be incorrect.” So, there’s that.

A study published last year in the Journal of Medicine, Surgery, and Public Health suggests that some AI modelers are responding to the demand to approach mental health needs with accuracy. “Developing AI models that are interpretable and can provide explanations for their recommendations is a growing trend. This promotes transparency and allows clinicians and patients better to understand the reasoning behind AI-generated insights and decisions,” the study concludes.

But what about far less controlled settings, like the internet, for example?  

Other reporting points out that the application of generative AI to mental health subjects has a dark side. One study found that “AI tools have been providing users with harmful content surrounding eating disorders around 41% of the time.”

These are the kinds of issues that must be ironed out. But in the short term, Luster is as curious as anyone else. 

As for that acquaintance who was going to give his girlfriend a piece of his mind? AI actually changed his mind. When AI held up a mirror to the potential injury his words might cause to someone he cares about, he changed course.

“He later reflected on that, and when he went to visit his girlfriend that night, he changed his whole approach,” Luster says. “He enhanced his own emotional disposition and understanding more dimensionally by considering both parties’ feelings in this case.” 

Luster pauses before adding the most important part. “He said they had a wonderful evening.”

Learn more about AI and mental health

AI and mental health represent just one category of exploration. For those looking to enhance their AI knowledge in general, University of Phoenix recently launched an undergraduate elective course that offers hands-on experience in an instructor-led environment. 

Known as Generative AI in Everyday Life, the course also builds durable skills like creativity and adaptability while teaching students how to effectively and ethically use AI.

Intrigued? Request additional information!


ABOUT THE AUTHOR

A journalist-turned-marketer, Laurie Davies has been writing since her high school advanced composition teacher told her she broke too many rules. She has worked with University of Phoenix since 2017, and currently splits her time between blogging and serving as lead writer on the University’s Academic Annual Report. Previously, she wrote marketing content for MADD, Kaiser Permanente, Massage Envy, UPS and other national brands. She lives in the Phoenix area with her husband and son, who is the best story she’s ever written. 


ABOUT THE REVIEWER

Dr. Eylers is the University of Phoenix vice provost for Academic Operations and Doctoral Studies. Prior to joining the University in 2009, he spent 15 years in environmental engineering consulting, sustainability consulting, teaching, and business and technology program management. He was among the first to be licensed as a professional environmental engineer in Arizona.


This article has been vetted by University of Phoenix’s editorial advisory committee. 
Read more about our editorial process.
