The growing use of large language models (LLMs) as accessible tools for mental health support presents both opportunities and challenges, and a team led by Lingyao Li and Xiaoshan Huang from McGill University, alongside Renkai Ma and Ben Zefeng Zhang from Stony Brook University, is investigating how people actually experience these interactions. The researchers systematically analyzed posts from multiple social media platforms to understand user sentiment toward LLM chatbots across a range of mental health conditions. This work reveals a nuanced picture: experiences vary significantly with individual circumstances, with users reporting positive outcomes for conditions like ADHD and autism but more negative responses when dealing with higher-risk disorders such as schizophrenia or bipolar disorder. By linking user perspectives to core values like identity, autonomy, and privacy, the team highlights the need to move beyond generic chatbot design towards solutions tailored to specific needs and grounded in value-sensitive principles.
Users’ Mental Health Needs and LLMs
This study investigates how individuals perceive and interact with large language models (LLMs), specifically ChatGPT and Claude, in the context of their mental health. The researchers aimed to understand the diverse ways people use these models and to identify the associated benefits, risks, and ethical considerations, with a focus on users' lived experiences. The study analyzed publicly available social media posts discussing LLM use and mental health. The research team employed an LLM-assisted annotation process, using GPT-4 to extract key information from each post: the chatbot used, any mentioned mental health condition, the overall sentiment expressed, and the user's perspective, categorized with a detailed codebook developed through thematic analysis.
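To make that annotation step concrete, the sketch below shows what a single GPT-4-assisted extraction call could look like, assuming the openai Python SDK. The prompt wording, field names, and example post are illustrative assumptions for this sketch, not the authors' actual codebook or pipeline.

```python
# Minimal sketch of an LLM-assisted annotation call (illustrative, not the authors' exact prompt or schema).
import json
from openai import OpenAI  # assumes the openai Python SDK (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANNOTATION_PROMPT = """You are annotating a social media post about LLM chatbots and mental health.
Return a JSON object with these fields:
- "chatbot": the chatbot discussed (e.g., "ChatGPT", "Claude"), or "unspecified"
- "condition": any mental health condition mentioned, or "none"
- "sentiment": "positive", "negative", "neutral", or "mixed"
- "perspective": a short codebook label (e.g., "instrumental support", "emotional support", "clinical skepticism")

Post:
{post}
"""

def annotate_post(post_text: str) -> dict:
    """Ask GPT-4 to extract the key labels from a single post (no retry or validation in this sketch)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": ANNOTATION_PROMPT.format(post=post_text)}],
        temperature=0,  # keep the annotation output as deterministic as possible
    )
    return json.loads(response.choices[0].message.content)

# Hypothetical example post, for illustration only
print(annotate_post("ChatGPT helps me organize my thoughts when my ADHD makes planning impossible."))
```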
The analysis revealed clear patterns in how users experience these technologies. The most frequent themes included instrumental support, where users leverage LLMs for practical tasks such as organizing thoughts and accessing information, suggesting potential for self-management. Emotional support also emerged as significant, with users seeking comfort and companionship, raising questions about attachment and its associated risks. Informational support was likewise common, though concerns about misinformation were present. A notable proportion of users expressed clinical skepticism, reflecting healthy critical thinking, while a smaller percentage reported negative impacts on their mental health, highlighting the need for further investigation.
Concerns about dependency on LLMs for emotional support or problem-solving also surfaced, suggesting potential for unhealthy coping mechanisms. Researchers addressed ethical considerations by emphasizing the use of publicly available data with all identifying information removed, securing Institutional Review Board (IRB) approval, and adopting a conservative approach to privacy and data security. This research provides valuable insights into the emerging relationship between LLMs and mental health, highlighting the potential benefits and risks, and underscoring the need for responsible development and deployment of these technologies. Further research is needed to understand long-term effects and develop effective interventions.
Social Media Analysis of LLM Mental Health Perceptions
This study introduces a methodology for understanding public perceptions of large language model (LLM) chatbots in the context of mental health support. Researchers systematically collected 129,543 posts from six major social media platforms over a nearly three-year period to capture evolving online discourse, ultimately refining the dataset to 112,698 unique posts. To move beyond simple topic identification, the study developed a detailed conceptual schema for analyzing user-generated content. This schema defines five key labels that capture both the descriptive and experiential dimensions of each post: the chatbot discussed, the user's sentiment, the relevant mental health condition, the user's overall perspective, and the underlying human or moral values being expressed.
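As a rough illustration, that five-label schema might be represented as a simple record type like the one below; the field names and example values are assumptions made for this sketch, not the paper's exact coding scheme.

```python
# Illustrative encoding of the five-label schema as a Python dataclass (assumed names, not the paper's codebook).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PostAnnotation:
    chatbot: str               # which chatbot the post discusses, e.g., "ChatGPT" or "Claude"
    sentiment: str             # "positive" | "negative" | "neutral" | "mixed"
    condition: Optional[str]   # mental health condition mentioned, if any, e.g., "ADHD"
    perspective: str           # user's overall perspective, e.g., "instrumental support"
    values: list[str]          # human or moral values expressed, e.g., ["autonomy", "privacy"]
```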
The core of the analytical process was an LLM-assisted extraction pipeline that uses prompt engineering to automatically identify and extract these five labels from each social media post. By combining computational methods with a theoretically grounded framework, the study provides a robust and scalable approach to understanding the complex interplay between technology, mental health, and human values. The resulting data reveals condition-specific patterns in user experiences, highlighting the need for tailored chatbot designs that account for diverse needs and values.
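Once each post carries its extracted labels, condition-specific sentiment patterns could be surfaced with a simple cross-tabulation, as in the sketch below. The column names and toy rows are illustrative assumptions, not the study's data or code.

```python
# Sketch of tabulating sentiment by condition from the extracted labels (toy data, assumed column names).
import pandas as pd

annotations = pd.DataFrame([
    {"condition": "ADHD", "sentiment": "positive"},
    {"condition": "ADHD", "sentiment": "positive"},
    {"condition": "schizophrenia", "sentiment": "negative"},
    {"condition": "bipolar disorder", "sentiment": "mixed"},
])

# Cross-tabulate sentiment by condition, normalized within each condition,
# to surface condition-specific response patterns.
sentiment_by_condition = pd.crosstab(
    annotations["condition"], annotations["sentiment"], normalize="index"
)
print(sentiment_by_condition)
```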
Neurodivergent Users Embrace Chatbot Support
This research delivers a detailed mapping of how individuals discuss their experiences with large language model (LLM) chatbots in relation to their mental health. Scientists leveraged data from six major social media platforms to examine user sentiments, perspectives, and underlying values across a range of mental health conditions, employing an LLM-assisted analytical pipeline grounded in Value-Sensitive Design. Results demonstrate a clear condition-specific pattern in user responses to LLM chatbots. Individuals with neurodivergent conditions, such as attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD), consistently expressed strongly positive sentiments and perceived instrumental or appraisal support from the chatbots.
Conversely, users discussing higher-risk disorders, including schizophrenia and bipolar disorder, exhibited more negative sentiments. Further analysis revealed how user perspectives co-occur with fundamental values like identity, autonomy, and privacy. The research team identified a typology of impacts, delineating the conditions under which LLMs appear to benefit versus potentially harm users' emotional well-being. This detailed examination of user-reported experiences provides an evidence-based foundation for understanding the bidirectional impacts of LLMs on mental health in real-world contexts. It establishes a link between specific mental health conditions, user perspectives, and the values embedded in LLM design, paving the way for safer and more context-sensitive AI systems.
Toward Condition-Specific, Value-Sensitive Chatbot Design
This research presents a large-scale characterization of public perceptions regarding the use of large language model chatbots for mental health support. By analyzing social media posts, scientists identified condition-specific patterns in user sentiment, revealing that individuals with neurodivergent conditions, such as ADHD and autism spectrum disorder, generally report positive experiences and find the chatbots instrumentally helpful. Conversely, users discussing higher-risk disorders, including schizophrenia, express more negative sentiments and concerns about the accuracy and potential psychological harm of these interactions. The study further demonstrates that user perspectives are closely linked to underlying values, specifically around autonomy, identity, and privacy.
Overall engagement with chatbots tends to be positive, although sentiment can decline over time, highlighting the importance of considering both individual conditions and personal values when designing these systems. Scientists acknowledge limitations stemming from the accuracy of the automated analysis pipeline and suggest future work should focus on refining these tools through improved prompt design and rigorous testing. This research underscores the need for condition-specific and value-sensitive design principles to maximize the benefits and minimize the risks of using large language models in mental healthcare.
👉 More information
🗞 LLM Use for Mental Health: Crowdsourcing Users’ Sentiment-based Perspectives and Values from Social Discussions
🧠 ArXiv: https://arxiv.org/abs/2512.07797