Spring 2024 Bulletin

Mental Health and AI

Photo: A person sitting in a field with their arms around a dog on their right and a robot on their left (iStock.com/miriam-doerr).

By Kate Carter, Program Officer for Science, Engineering, and Technology at the Academy

Mental health in America is a looming crisis, silently corroding the fabric of society. Despite increased awareness, the statistics paint a sobering picture: one in five adults grapples with mental illness annually, yet access to adequate care remains challenging, especially in rural areas. Artificial Intelligence (AI) and other emerging technologies can significantly transform mental health care by providing tailored interventions, early detection tools, and convenient therapy options, if concerns about access, ethics, and equity are addressed.

These issues were at the forefront of a Mental Health and AI exploratory meeting held at the Academy on March 11–12, 2024. Chaired by Alan Leshner (American Association for the Advancement of Science) and Paul Dagum (Applied Cognition), the meeting convened experts in computer science, medicine, psychiatry, sociology, and policy to discuss the potential and pitfalls of emerging technology for diagnosing and treating mental health disorders.

The participants agreed that AI has already changed the landscape of mental health. As more Americans suffer from mental illness and as cost and location put treatment increasingly out of reach, a growing number are turning to AI-powered chatbots for cognitive behavioral therapy. Some practitioners are already using AI to analyze brain scans for physical indicators of disease. AI-driven predictive analytics can help clinicians identify personalized treatment options and anticipate potential relapses, enhancing the effectiveness of mental health research and interventions.

The future of AI is more revolutionary and, at least for now, more uncertain. Several attendees acknowledged that we lack a full understanding of mental health disorders and that AI could be instrumental in improving our knowledge base, allowing for better definitions and categorization of those disorders. However, there remained significant disagreement about its potential for treatment. Some saw AI as an aid to human practitioners, one more tool to save time, provide more precise diagnoses, and monitor patients’ moods between sessions. Others envisioned AI as an eventual replacement for human providers, especially for individuals currently receiving insufficient care.

A mix of AI technologists and human practitioners initiated another discussion about providing treatment that is ethical and equitable. Some participants expressed skepticism about AI’s lack of humanity, asking, for example, whether an entity that will never experience death can effectively provide comfort. Others wondered about the potential conflict between maintaining user engagement and delivering the messages that are most beneficial to the patient, even if they are not the most welcome. The group pondered the ultimate risk of testing AI in a field where optimal outcomes are less clear than in other areas of health care and where mistakes at the treatment level can cause severe and permanent damage.

Many were quick to point out the cultural variability in reactions to treatment, citing data suggesting that some users prefer chatbots to human therapists while others lie to them or withhold information. Some attendees noted that advanced AI features like natural language processing work well in only some languages, limiting access for many people. Others emphasized that inequitable access already exists for many rural populations, arguing that AI-powered treatments would improve on the current lack of care. Finally, several people insisted that building and testing models must benefit current and future patients and called for equitable design and the creation of policy guardrails.

While multiple groups are starting to develop AI-specific policies, few guardrails exist to govern the use of AI in mental health. Personal data breaches and the lack of mental health insurance reimbursements are significant concerns that cannot be ignored. Moreover, current regulations have failed to keep up with the pace of technology. However, establishing regulations and policies around data ownership and health care payer models can mitigate the risks and help ensure that the benefits of AI are accessible to all who need them.

During the second day of the meeting, participants discussed ways to continue developing this work. They were excited about the potential of this technology and about the ways the Academy could lead in guiding research and policy to ensure ethical and equitable applications. The Academy is currently exploring mental health and technology in greater depth.
