
Mental health has become a topic of major importance. According to the World Health Organization (WHO), mental disorders account for 12.5% of all health problems worldwide, which means that approximately 450 million people struggle daily with mental health issues that significantly affect their lives. In Spain, a recent study reveals that nearly half of young people between 15 and 29 years old (48.9%) report having experienced some mental health problem.

Despite growing awareness of the importance of mental health, significant obstacles still exist in accessing care and treatment. The limited availability of public mental health services, combined with the high cost of private therapy, often leaves people facing their disorders or conditions alone, without the assistance they need. On top of this come personal difficulties, such as a lack of time in a world many experience as a constant race, or the introversion and reticence that can lead some people to avoid seeking help.

In this context, Artificial Intelligence (AI) emerges as a promising aid to improve access and efficiency in mental health care. AI-based tools can help overcome or reduce the current obstacles by offering new forms of diagnosis, treatment, and monitoring, and they can be aimed at both therapists and patients. For therapists, these tools can be a valuable complement in training and education, or even assist during consultations through session analysis. Patients, in turn, can benefit from tools that accompany and guide them in their daily lives.


One of the most promising examples is the development of therapeutic chatbots. Propelled by the rapid progress of Large Language Models (LLMs), chatbots now let users communicate with them in natural language. When specialized in mental health, these AI systems can provide therapeutic support 24/7, from anywhere.
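
To make this concrete, here is a minimal sketch of how a general-purpose LLM can be steered towards supportive conversation through a system prompt. The OpenAI Python client and the model name are used purely as an illustration; any chat-oriented LLM could play the same role, and a prompt alone is of course far from a clinically validated tool.

```python
# Minimal sketch: steering a general-purpose LLM towards supportive
# conversation with a system prompt (illustrative, not a validated tool).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a supportive mental-health companion. Respond with empathy, "
    "never give diagnoses, and encourage seeking professional help when needed."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat(user_message: str) -> str:
    """Send one user turn and return the assistant's reply."""
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```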

These therapeutic chatbots can help, for example, with patient education and with practicing therapy exercises outside of consultations. In addition, their interactions can provide continuous monitoring and evaluation, allowing therapists to track patients' progress and adjust treatment accordingly.

However, in the realm of therapy, great care must be taken with the methods used, which must be thoroughly evaluated to ensure patient well-being. Flexibility, one of the great advantages of LLMs, can be the Achilles' heel of this technology when it is applied directly in an environment that requires strict control. This comes on top of other challenges, such as hallucinations, inaccuracies, and biases.

The key to integrating LLMs into the field of mental health lies in intelligent agents and their design. The chatbot the user ultimately interacts with is a more complex system that combines different components, of which the LLM is just one part. The intelligent agent overcomes the limitations mentioned above and gives us more control over the system's behavior: it is equipped with a set of rules that guide it through the steps of the different therapy exercises. This strikes a balance between control and flexibility, since the agent still uses the LLM to adapt each response to the context of the conversation, achieving a natural interaction.
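
As an illustration, the following sketch separates the two responsibilities: a fixed, rule-based script decides which step of the exercise comes next, while the LLM is used only to phrase that step naturally in the context of the conversation. The exercise steps, the class, and the `llm` callable are hypothetical names for the sake of the example.

```python
# Sketch of a rule-guided agent: the script controls the flow of the
# exercise; the LLM only adapts the wording to the conversation.
BREATHING_EXERCISE = [
    "Welcome the user and briefly explain the breathing exercise.",
    "Guide a slow four-second inhale.",
    "Guide holding the breath for four seconds.",
    "Guide a slow six-second exhale.",
    "Ask the user how they feel and close the exercise.",
]

class TherapyAgent:
    def __init__(self, llm):
        self.llm = llm        # any callable: (instruction, history) -> text
        self.step = 0
        self.history = []

    def respond(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        # The rule set, not the LLM, decides what happens next.
        instruction = BREATHING_EXERCISE[self.step]
        reply = self.llm(instruction, self.history)
        self.history.append(("agent", reply))
        self.step = min(self.step + 1, len(BREATHING_EXERCISE) - 1)
        return reply
```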

Other complementary measures can also be applied. One is to fine-tune these models on data from therapy sessions, which helps tailor responses to this kind of scenario and, in turn, minimizes inaccuracies. Another is to add a component to the agent's architecture that acts as an evaluator of its responses, so that any response that does not meet certain criteria is rejected and never reaches the patient. Although this may sound like fighting fire with fire, it can be done with a second LLM specialized in the task, as demonstrated by Meta's Llama Guard model [1], which can effectively classify the safety of responses.
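
Following the usage pattern Meta publishes for Llama Guard, a response filter of this kind could look roughly like the sketch below. The model identifier and the output format ("safe" / "unsafe" plus a category) follow the public model card, but treat the details as assumptions to verify against the version you actually use.

```python
# Sketch of a safety gate: a second LLM (Llama Guard) classifies each
# candidate reply, and unsafe replies never reach the patient.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-Guard-3-8B"  # gated model on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
guard = AutoModelForCausalLM.from_pretrained(model_id)

def is_safe(conversation: list[dict]) -> bool:
    """Llama Guard emits 'safe' or 'unsafe' (plus a category) as text."""
    input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt")
    output = guard.generate(input_ids=input_ids, max_new_tokens=20)
    verdict = tokenizer.decode(output[0][input_ids.shape[-1]:],
                               skip_special_tokens=True)
    return verdict.strip().startswith("safe")

def guarded_reply(agent, user_message: str) -> str:
    reply = agent.respond(user_message)
    chat = [{"role": "user", "content": user_message},
            {"role": "assistant", "content": reply}]
    # Reject any reply the evaluator flags; fall back to a safe message.
    if not is_safe(chat):
        return "I'm sorry, I can't help with that. Let's try something else."
    return reply
```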

With all this in mind, within the GTI-IA team (VRAIN, UPV) we are focusing on Emotional Regulation: the ability to manage emotions, both our own and those of others, which is crucial for people's emotional well-being. Our goal is to develop affective agents that, besides adapting to patients' or users' emotions, can also help them regulate those emotions when necessary. In this way, the agents can assist people who have difficulty regaining their emotional balance and thereby increase their well-being.

One of our latest contributions to this field is a computational model for planning emotional regulation strategies [2]. This model, personalized through the user's personality traits and data from their previous interactions, will be the basis for developing affective agents capable of assisting in emotional regulation. Affective agents using this model will be able to select the best available actions to help the user return from their current emotional state to their state of balance.
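
The published model is more elaborate, but as a toy illustration of the general idea (not the model from [2]), a planner can score each regulation strategy by how close its expected effect, weighted by a personalized affinity derived from traits and past interactions, brings the user to their balance point. All names and numbers below are made up for the example.

```python
# Toy illustration (not the published model): pick the regulation
# strategy whose expected effect brings the user closest to balance.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    shift: tuple[float, float]  # expected (valence, arousal) change

STRATEGIES = [
    Strategy("deep breathing", (0.1, -0.4)),
    Strategy("cognitive reappraisal", (0.4, -0.1)),
    Strategy("pleasant distraction", (0.3, 0.1)),
]

def select_strategy(state, balance, affinity):
    """`affinity[name]` in [0, 1] personalizes each strategy's effect,
    e.g. from personality traits and the outcome of past interactions."""
    def predicted_gap(s: Strategy) -> float:
        v = state[0] + affinity[s.name] * s.shift[0]
        a = state[1] + affinity[s.name] * s.shift[1]
        return (v - balance[0]) ** 2 + (a - balance[1]) ** 2
    return min(STRATEGIES, key=predicted_gap)

# Example: an anxious user (low valence, high arousal) aiming for calm.
best = select_strategy(state=(-0.5, 0.7), balance=(0.2, 0.0),
                       affinity={"deep breathing": 0.9,
                                 "cognitive reappraisal": 0.6,
                                 "pleasant distraction": 0.4})
print(best.name)  # -> "deep breathing"
```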

The aim of equipping agents with these emotion-management skills is to offer tools that help users achieve and maintain their mental well-being. This represents another step towards the integration of AI in mental health and the reduction of the current obstacles to accessing therapy.

References:

[1] https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/

[2] https://www.mdpi.com/2079-8954/12/3/77