Mental health and neurodivergence-related content on social media showed highly variable accuracy, with misinformation prevalence ranging from 0% to 57% across platforms and topics, according to a systematic review of 27 studies analyzing 5,057 posts.
Researchers conducted a systematic review using MEDLINE Ultimate, APA PsycINFO, CINAHL, and Scopus, with searches performed on October 1, 2024. Because of heterogeneity in platforms, topics, and evaluation methods, findings were synthesized narratively rather than pooled in a meta-analysis.
Across the 17 studies reporting prevalence, the mean misinformation rate was 26%, with substantial variation by platform and topic. In included studies, TikTok content showed higher misinformation prevalence than YouTube content, including 52% for attention-deficit/hyperactivity disorder (ADHD)–related videos and 41% for autism-related content. On YouTube, misinformation ranged from 7% for dissociative identity disorder content to 57% for magnetic resonance imaging claustrophobia, with a mean of 22%. Facebook content showed a mean prevalence of 15%, while one study of X reported 19%. YouTube Kids content showed no misinformation for anxiety and depression and 9% for ADHD.
Topic-specific differences were also observed. Content about neurodivergence showed higher misinformation prevalence than content about mental health conditions, with autism-related misinformation reported at 40% to 41% and ADHD-related misinformation at 38% to 52%. In contrast, postpartum depression content showed lower misinformation prevalence, ranging from 3% to 8%.
Reliability and quality assessments varied across studies. Full DISCERN scores for YouTube content ranged from approximately 31 to 36, indicating poor reliability. Modified DISCERN scores ranged from 0.4 for TikTok videos on dissociative identity disorder to 3.55 for YouTube videos on agoraphobia, reflecting variability from poor to high reliability. Global Quality Scale scores ranged from poor to moderate overall.
Content produced by professionals was generally more reliable and higher quality than content from nonprofessionals, although some studies reported similar reliability, and others found no differences in quality between uploader types.
Study quality also varied, with an average quality rating of about 65% and individual ratings ranging from approximately 41% to 80%. Many studies evaluated content in only a single language and did not report interrater reliability.
The review also described substantial variation in evaluation approaches and reporting methods. Definitions used to identify misinformation differed between studies. Most analyses focused on YouTube, with comparatively fewer studies examining X, Facebook, or Instagram.
“There is a need for strengthened content moderation, as well as consistent definitions and measures of mental health misinformation,” wrote lead researcher Alice Carter of the Department of Clinical Psychology and Psychological Therapies, Norwich Medical School, University of East Anglia, United Kingdom, and colleagues.
The researchers reported no conflicts of interest.
Source: Journal of Social Media Research