We need to ensure that these technologies are safe and protect users’ mental well-being rather than put it at risk

Falk Gerrik Verhees

The research team emphasizes that the transparency requirement of the European AI Act – simply informing users that they are interacting with AI – is not enough to protect vulnerable groups. They call for enforceable safety and monitoring standards, supported by voluntary guidelines to help developers with implementing safe design practices. 

As solution they propose linking future AI applications with persistent chat memory to a so-called “Guardian Angel” or “Good Samaritan AI” – an independent, supportive AI instance to protect the user and intervene when necessary. Such an AI agent could detect potential risks at an early stage and take preventive action, for example by alerting users to support resources or issuing warnings about dangerous conversation patterns.
Recommendations for safe interaction with AI 

In addition to implementing such safeguards, the researchers recommend robust age verification, age-specific protections, and mandatory risk assessments before market entry. “As clinicians, we see how language shapes human experience and mental health,” says Falk Gerrik Verhees, psychiatrist at Dresden University Hospital Carl Gustav Carus. “AI characters use the same language to simulate trust and connection – and that makes regulation essential. We need to ensure that these technologies are safe and protect users’ mental well-being rather than put it at risk,” he adds. 

The researchers argue that clear, actionable standards are needed for mental health-related use cases. They recommend that LLMs clearly state that they are not an approved mental health medical tool. Chatbots should refrain from impersonating therapists, and limit themselves to basic, non-medical information. They should be able to recognize when professional support is needed and guide users toward appropriate resources. Effectiveness and application of these criteria could be ensured through simple open access tools to test chatbots for safety on an ongoing basis. 

“Our proposed guardrails are essential to ensure that general-purpose AI can be used safely and in a helpful and beneficial manner,” concludes Max Ostermann, researcher in the Medical Device Regulatory Science team of Prof. Gilbert and first author of the publication in npj Digital Medicine.

Important note:
In times of a personal crisis please seek help at a local crisis service, contact your general practitioner, a psychiatrist/psychotherapist or in urgent cases go to the hospital. In Germany you can call 116 123 (in German) or find offers in your language online at www.telefonseelsorge.de/internationale-hilfe.

Source: Dresden University of Technology 

Comments are closed.