Researchers are beginning to understand how individuals perceive and interact with artificial intelligence offering emotional support, a growing trend outside traditional healthcare. Elham Aghakhani and Rezvaneh Rezapour of Drexel University, together with colleagues, analysed more than 5,000 Reddit posts across 47 mental health communities to explore user experiences with AI for support or therapy. Their analysis, grounded in established psychological models, reveals that successful engagement hinges on practical outcomes, trust, and the quality of AI responses, not simply emotional connection. The work is significant because it demonstrates a robust method for analysing large-scale online discourse about sensitive topics and underscores the need to study how people interpret and relate to these increasingly prevalent language technologies in real-world mental wellbeing contexts.

This work addresses a critical gap in knowledge: how people actually evaluate and relate to these systems outside of controlled clinical environments. The team developed a theory-informed annotation framework grounded in the Technology Acceptance Model and therapeutic alliance theory, allowing for detailed analysis of evaluative language, adoption attitudes, and relational alignment at scale.

A hybrid pipeline combining large language models (LLMs) and human annotation was then employed to process the substantial dataset of Reddit posts, enabling the researchers to move beyond simple sentiment analysis and examine the nuances of user perceptions. This methodology operationalises established psychological constructs for large-scale natural language processing, offering a robust and scalable way to study human-AI interaction in sensitive contexts. The study reveals that sustained engagement with AI for mental health support is driven primarily by tangible outcomes, trust in the system, and the quality of its responses, rather than solely by emotional connection. Positive sentiment was most strongly correlated with task and goal alignment, indicating that users value AI that effectively addresses their needs and provides practical assistance.
Conversely, companionship-focused use frequently involved misaligned alliances and reported risks, including dependence and symptom escalation, highlighting the potential pitfalls of relying on AI for purely emotional fulfilment. Notably, the research establishes a conditional relationship between therapeutic alliance and engagement: a strong alliance is built on shared tasks and goals, while emotional bonding alone is a weak predictor of positive outcomes and can even be detrimental. This work makes three key contributions: a unique dataset capturing user evaluations of AI support, a theory-grounded approach to analysing adoption attitudes and relational alignment, and quantitative evidence linking system characteristics to user engagement and perceived risk, ultimately informing the design and evaluation of more effective AI-supported mental health tools. The dataset, annotation guidelines, and code will be released to facilitate further research in this rapidly evolving field.

Reddit Data Collection and Preprocessing for AI Perceptions

Researchers conducted a large-scale analysis of online mental health discourse, examining 5,126 Reddit posts from 47 distinct mental health communities to understand user perceptions of AI-based emotional support systems. The subreddits were curated with guidance from DSM-5 diagnostic categories, including depression, anxiety, and bipolar disorder, ensuring comprehensive coverage of relevant discussions, and general mental health forums such as r/mentalhealth and r/TalkTherapy were included for broader insight. Data collection, utilising the ArcticShift API, initially yielded 4,703,056 submissions posted between November 30, 2022, and August 15, 2025, which were then subjected to rigorous preprocessing to improve data quality and remove irrelevant content. To pinpoint posts specifically discussing AI tools for therapeutic purposes, the team developed a hybrid LLM-human pipeline, beginning with keyword-based retrieval across a randomly sampled 120,000 posts.
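A minimal sketch of what such a keyword-based retrieval stage might look like is shown below; the file path, field names, and example keywords are illustrative assumptions rather than details taken from the paper.

```python
import json
import re

# Illustrative subset of AI-related keywords; the study's curated list
# contained 146 terms refined from roughly 800 LLM-extracted candidates.
AI_KEYWORDS = ["chatgpt", "gpt-4", "character.ai", "replika", "ai therapist", "chatbot"]

# One case-insensitive pattern with word boundaries, so short keywords
# do not fire inside unrelated tokens.
pattern = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in AI_KEYWORDS) + r")\b",
    flags=re.IGNORECASE,
)

def retrieve_candidate_posts(path: str) -> list[dict]:
    """Keep submissions whose title or body mentions any AI-related keyword."""
    matches = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:                      # one JSON object per line (JSONL dump)
            post = json.loads(line)
            text = f"{post.get('title', '')} {post.get('selftext', '')}"
            if pattern.search(text):
                matches.append(post)
    return matches

candidates = retrieve_candidate_posts("mental_health_submissions.jsonl")
print(f"{len(candidates)} candidate posts mention AI-related terms")
```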
GPT-4o mini was first employed to classify posts, identifying those referencing AI for emotional support, therapy, or mental health guidance, and to extract approximately 800 unique AI-related terms, which were refined to a list of 146 keywords through manual review and deduplication. Applying these keywords produced a corpus of 572,734 posts; the classifier was then validated on a 10,000-post sample, achieving high agreement with human judgements (Fleiss’ κ = 0.90), before being applied to the entire filtered corpus, ultimately yielding 6,206 posts relevant to AI-mediated mental health support. Following this filtering, a random sample of 200 posts was manually reviewed to categorise content into experiential, exploratory, advertisement, and irrelevant types, with strong agreement between human annotators and the LLM classifications (Fleiss’ κ = 0.78). The researchers retained only experiential and exploratory posts, those detailing personal AI use or seeking advice, resulting in a final dataset of 5,126 posts for in-depth analysis of user perceptions of AI as a therapeutic tool.
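To illustrate the agreement check, the sketch below computes Fleiss’ κ over a validation sample in which the LLM and two human annotators each assign one of the four post categories named above; the label set comes from the paper, but the toy ratings and data layout are assumptions for demonstration only.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

CATEGORIES = ["experiential", "exploratory", "advertisement", "irrelevant"]
cat_index = {c: i for i, c in enumerate(CATEGORIES)}

# ratings[i] holds the labels the three raters (LLM + two humans) gave post i;
# in practice these would be loaded from the annotation files.
ratings = [
    ["experiential", "experiential", "experiential"],
    ["exploratory", "exploratory", "experiential"],
    ["advertisement", "advertisement", "advertisement"],
    ["irrelevant", "irrelevant", "irrelevant"],
]

# Convert to an (n_posts, n_raters) integer matrix, then to the
# (n_posts, n_categories) count table that Fleiss' kappa expects.
codes = np.array([[cat_index[label] for label in row] for row in ratings])
table, _ = aggregate_raters(codes, n_cat=len(CATEGORIES))

print(f"Fleiss' kappa = {fleiss_kappa(table):.2f}")
```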

This approach enabled the team to operationalise theory-grounded constructs within large-scale discourse analysis, revealing nuanced understandings of how users interpret and relate to language technologies in sensitive real-world contexts. The study developed a detailed annotation framework grounded in therapeutic alliance theory and the Technology Acceptance Model, capturing both pragmatic and relational aspects of AI-mediated support. Specifically, the framework assessed dimensions such as perceived usefulness, ease of use, intention to continue use, and perceived trust, alongside the components of the therapeutic alliance (bond, task, and goal), allowing a nuanced evaluation of user experiences and attitudes towards AI in mental health contexts; these criteria were measured using categorical and descriptive scales to ensure comprehensive data capture. The methodology facilitated a deeper understanding of the factors shaping engagement with AI tools for emotional support, highlighting the importance of narrated outcomes, trust, and response quality over emotional connection alone.
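One way such a framework could be represented for annotation at scale is sketched below; the dimension names follow the paper, but the field names, scale values, and example are assumptions, since the released guidelines define the actual categories.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Rating(str, Enum):
    """Simple categorical scale; the published guidelines define the real scales."""
    POSITIVE = "positive"
    NEUTRAL = "neutral"
    NEGATIVE = "negative"
    NOT_MENTIONED = "not_mentioned"


@dataclass
class PostAnnotation:
    post_id: str
    # Technology Acceptance Model dimensions
    perceived_usefulness: Rating = Rating.NOT_MENTIONED
    ease_of_use: Rating = Rating.NOT_MENTIONED
    intention_to_continue: Rating = Rating.NOT_MENTIONED
    perceived_trust: Rating = Rating.NOT_MENTIONED
    # Therapeutic alliance components
    bond_alignment: Rating = Rating.NOT_MENTIONED
    task_alignment: Rating = Rating.NOT_MENTIONED
    goal_alignment: Rating = Rating.NOT_MENTIONED
    # Free-text notes, e.g. reported risks such as dependence
    notes: Optional[str] = None


example = PostAnnotation(
    post_id="abc123",
    perceived_usefulness=Rating.POSITIVE,
    task_alignment=Rating.POSITIVE,
    bond_alignment=Rating.NEUTRAL,
    notes="User describes practical coping suggestions as helpful.",
)
print(example)
```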

AI Support Success Relies on Outcomes and Trust

Scientists analysed 5,126 Reddit posts from 47 mental health communities, focusing on experiential or exploratory use of artificial intelligence for emotional support or therapy. Grounded in the Technology Acceptance Model and therapeutic alliance theory, the team developed a theory-informed annotation framework and applied a hybrid LLM-human pipeline to analyse evaluative language and relational alignment at scale. The results demonstrate that engagement is shaped primarily by narrated outcomes, trust, and response quality rather than emotional bond alone. The analysis also revealed that positive sentiment is most strongly associated with task and goal alignment, indicating that users value practical help from AI systems.

Conversely, companionship-oriented use more often involves misaligned alliances and reported risks, such as dependence and symptom escalation, highlighting the potential harms of seeking emotional connection alone. The research team identified posts describing AI use between November 2022 and August 2025, curating a dataset of 5,126 posts specifically focused on AI-supported mental health care. This detailed analysis captured pragmatic evaluations, including usefulness, trust, and outcome quality, alongside dimensions of task, goal, and bond alignment. The measures confirm that sustained engagement with AI for mental health support hinges on demonstrable outcomes, trust, and the quality of responses, not simply emotional bonding.

The data show a conditional therapeutic alliance: positive engagement correlates strongly with task and goal alignment, while emotional bond alone exhibits a weak relationship with positive sentiment and frequently accompanies dependency and reported harm. The study contributes a novel dataset capturing user evaluations of AI systems for emotional support, alongside a theory-grounded operationalisation of key psychological constructs for large-scale natural language processing analysis. The annotation schema integrates constructs from the Technology Acceptance Model and therapeutic alliance theory, enabling the capture of nuanced user attitudes and relational dynamics. Quantitative and qualitative evidence links system properties to adoption sentiment and therapeutic alliance, informing the design and evaluation of AI-supported mental health tools. The work highlights the importance of studying how users interpret language technologies in sensitive, real-world contexts, offering valuable insights for responsible AI development and deployment.
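A hedged sketch of how an association between alliance alignment and post sentiment could be quantified is given below, using a simple contingency-table test; the counts are placeholders chosen purely to show the mechanics and are not the study's figures.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder 2x2 contingency table (illustrative counts only, not the paper's):
# rows = task/goal alignment present vs absent,
# columns = post sentiment positive vs non-positive.
table = np.array([
    [320, 80],   # alignment present: mostly positive sentiment
    [150, 250],  # alignment absent: sentiment skews non-positive
])

chi2, p_value, dof, expected = chi2_contingency(table)

# An odds ratio gives a compact measure of how strongly alignment
# and positive sentiment co-occur in the annotated posts.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

print(f"chi-square = {chi2:.1f}, p = {p_value:.2e}, dof = {dof}")
print(f"odds ratio (alignment vs positive sentiment) = {odds_ratio:.2f}")
```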

LLM Acceptance Driven by Outcomes and Trust

Scientists have investigated how individuals perceive and interact with large language models (LLMs) when used for emotional support and mental wellbeing outside of traditional clinical environments. Researchers analysed 5,126 Reddit posts from 47 mental health communities to understand experiential and exploratory use of LLMs for these purposes. Grounded in the Technology Acceptance Model and therapeutic alliance theory, they developed a framework to analyse evaluative language, attitudes towards adoption, and relational dynamics at scale. The findings demonstrate that user engagement is primarily influenced by reported outcomes, trust in the system, and the quality of responses, rather than solely by the development of an emotional connection.

Positive experiences were most strongly linked to task and goal alignment, while companionship-focused use more frequently involved misaligned relationships and potential risks like dependence or worsening symptoms. This work highlights the importance of understanding how users interpret language technologies within sensitive, real-world contexts, emphasising that how people live with AI is as crucial as what the AI delivers. This study acknowledges limitations stemming from the use of Reddit data, which may reflect demographic and cultural biases specific to the platform and its user base. The analysis was restricted to English-language posts, potentially excluding diverse perspectives, and relied on self-reported experiences, which may not represent the broader population or establish causal relationships. Furthermore, the adaptation of established theories (the Technology Acceptance Model and therapeutic alliance theory) to discourse analysis means the findings illustrate how constructs surface in user narratives rather than formally validating the theories themselves. Future research could combine discourse analysis with longitudinal data, interviews, or diary studies to better understand escalation, disengagement, and the long-term effects of AI use on individuals.
