On 29 January 2026, over 30 international experts in artificial intelligence, mental health, ethics, and public policy gathered for an online workshop organized by the Delft Digital Ethics Centre (DDEC) at the Delft University of Technology (TU Delft) – the first WHO Collaborating Centre on AI for health governance, including ethics.

Held as an official pre-summit event of the India AI Impact Summit 2026, with support from the World Health Organization, the workshop convened researchers, policy-makers, clinicians, and advocates. Dr Alain Labrique, Director of WHO’s Department of Data, Digital Health, Analytics and AI, noted: “As AI increasingly interacts with people in moments of emotional vulnerability, we as WHO and its stakeholders must ensure these systems are designed and governed with safety, accountability and human well-being at their core.”

Central among the challenges discussed was the growing use of generative AI tools – neither designed nor tested for mental health – for emotional support, particularly by young people, and the potentially serious risks this may pose. “We are at a critical juncture”, Sameer Pujari, WHO’s AI Lead, remarked. “The pace of AI adoption in people’s daily lives has far outstripped investment in understanding its impact on mental health. Closing that gap requires coordinated action and dedicated resources from both the public and private sectors.”

Underscoring the importance of cross-disciplinary collaboration, Dr Kenneth Carswell of WHO’s Department of Noncommunicable Diseases and Mental Health added: “Minimizing risks from generative AI for mental health while maximizing benefits requires bringing together the voices of those most affected, clinical and research expertise, governance and regulatory frameworks, and data to inform understanding. WHO is committed to ensuring that users’ well‑being stays at the centre as these tools evolve.”

Key recommendations

The workshop distilled these discussions into three principal recommendations:

First, generative AI use should be recognized as a public mental health concern, with commensurate responses across government, health systems, and industry that address all generative AI solutions, not only those intended for mental health.

Second, mental health should be integrated into impact assessments and monitoring of AI solutions to better understand their effects on determinants of health, short-term clinical measures, and long-term outcomes such as emotional dependence. One workshop participant stressed: “We need independent investments to test these effects.”

Third, AI tools used for mental health support should be co-designed with mental health experts and people with lived experience, including youth. Tools must be grounded in the best available evidence and tailored to cultural, linguistic, and contextual factors. Workshop participants emphasized the importance of consumer empowerment, while TU Delft’s Dr Caroline Figueroa highlighted the urgent need for consensus on crisis referral frameworks and accountability systems.

Collaborating Centres: a strategic pillar for responsible AI

More broadly, the workshop illustrated how the WHO Collaborating Centre mechanism has become a critical pillar in implementing the WHO’s vision for responsible AI in health. Through this mechanism, WHO mobilizes world-class academic expertise and convenes diverse international stakeholders to generate evidence-based recommendations in support of its standard-setting role. As Dr Stefan Buijsman, managing director of the DDEC, noted: “As a WHO Collaborating Centre, we can increase impact by collaborating with experts around the world, domain experts, and governments.”

Looking ahead: building a global consortium

WHO is establishing a Consortium of Collaborating Centres on AI for Health, a network of leading institutions across all six WHO regions, to support Member States in the responsible adoption of AI. A pre-convening of candidate consortium members took place on 17–19 March 2026 at TU Delft, where institutions aligned on shared priorities and agreed on initial collaboration mechanisms to build the collaborative infrastructure needed to ensure that AI governance in health is grounded in evidence, ethics, and the needs of diverse populations worldwide.
