Benton TD, Boyd RC, Njoroge WF. Addressing the global crisis of child and adolescent mental health. JAMA Pediatrics. 2021;175:1108–10.
McGorry PD, Mei C, Chanen A, Hodges C, Alvarez‐Jimenez M, Killackey E. Designing and scaling up integrated youth mental health care. World Psychiatry. 2022;21:61–76.
Oswalt SB, Lederer AM, Chestnut-Steich K, Day C, Halbritter A, Ortiz D. Trends in college students’ mental health diagnoses and utilization of services, 2009–15. J Am Coll Health. 2020;68:41–51.
Centers for Disease Control and Prevention. Improving access to children’s mental health care. https://www.cdc.gov/childrensmentalhealth/access.html.
Lattie EG, Stiles-Shields C, Graham AK. An overview of and recommendations for more accessible digital mental health services. Nat Rev Psychol. 2022;1:87–100.
Lehtimaki S, Martic J, Wahl B, Foster KT, Schwalbe N. Evidence on digital mental health interventions for adolescents and young people: systematic overview. JMIR Ment Health. 2021;8:e25847.
Nisenson M, Lin V, Gansner M. Digital phenotyping in child and adolescent psychiatry: a perspective. Harv Rev Psychiatry. 2021;29:401–8.
Currey D, Hays R, Torous J. Digital phenotyping models of symptom improvement in college mental health: generalizability across two cohorts. J Technol Behav Sci. 2023;8:368–81.
Martinez-Martin N, Greely HT, Cho MK. Ethical development of digital phenotyping tools for mental health applications: Delphi study. JMIR Mhealth Uhealth. 2021;9:e27343.
Reiley L. Opinion: What my daughter told ChatGPT before she took her life. The New York Times. 2025. https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html.
Hill K. A teen was suicidal. ChatGPT was the friend he confided in. The New York Times. 2025. https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html.
Collins B, Klein E. Invasive neurotechnology: a study of the concept of invasiveness in neuroethics. Neuroethics. 2023;16:11.
Kučević E, Brackel-Schmidt CV, Lewandowski T, Leible S, Memmert L, Böhmann T. The prompt-a-thon: designing a format for value co-creation with generative AI for research and practice. In: Proceedings of the 57th Annual Hawaii International Conference on System Sciences. Honolulu, HI: Department of IT Management, Shidler College of Business, University of Hawaii; 2024.
Corrêa NK, Galvão C, Santos JW, Del Pino C, Pinto EP, Barbosa C, et al. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns. 2023;4:100857.
European Commission. Ethics guidelines for trustworthy AI. Shaping Europe’s digital future. Retrieved September 4, 2025, from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
National Conference of State Legislatures. Artificial Intelligence 2025 Legislation. July 10, 2025. https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.
Office of the Surgeon General. Protecting youth mental health: the U.S. Surgeon General’s advisory. US Department of Health and Human Services; 2021. http://www.ncbi.nlm.nih.gov/books/NBK575984/.
Creswell C, Shum A, Pearcey S, Skripkauskaite S, Patalay P, Waite P. Young people’s mental health during the COVID-19 pandemic. Lancet Child Adolesc Health. 2021;5:535–7. https://doi.org/10.1016/S2352-4642(21)00177-2.
American Psychological Association. Psychologists struggle to meet demand amid mental health crisis: 2022 COVID-19 Practitioner Impact Survey. 2022. https://www.apa.org/pubs/reports/practitioner/2022-covid-psychologist-workload.
Sun C-F, Correll CU, Trestman RL, Lin Y, Xie H, Hankey MS, et al. Low availability, long wait times, and high geographic disparity of psychiatric outpatient care in the US. Gen Hosp Psychiatry. 2023;84:12–7. https://doi.org/10.1016/j.genhosppsych.2023.05.012.
National Institute of Mental Health. Mental illness. 2022. https://www.nimh.nih.gov/health/statistics/mental-illness.
Philippe TJ, Sikder N, Jackson A, Koblanski ME, Liow E, Pilarinos A, et al. Digital health interventions for delivery of mental health care: systematic and comprehensive meta-review. JMIR Ment Health. 2022;9:e35159. https://doi.org/10.2196/35159.
Harrison V, Proudfoot J, Wee PP, Parker G, Pavlovic DH, Manicavasagar V. Mobile mental health: Review of the emerging field and proof of concept study. J Ment Health. 2011;20:509–24. https://doi.org/10.3109/09638237.2011.608746.
Haque MDR, Rubya S. An overview of chatbot-based mobile mental health apps: insights from app description and user reviews. JMIR Mhealth Uhealth. 2023;11:e44838. https://doi.org/10.2196/44838.
Abd-Alrazaq AA, Alajlani M, Abdallah Alalwan A, Bewick BM, Gardner P, Househ M. An overview of the features of chatbots in mental health: A scoping review. Int J Med Inform. 2019;132:103978. https://eprints.whiterose.ac.uk/151992/.
Olawade DB, Wada OZ, Odetayo A, David-Olawade AC, Asaolu F, Eberhardt J. Enhancing mental health with Artificial Intelligence: Current trends and future prospects. J Med Surg Public Health. 2024;3:100099. https://doi.org/10.1016/j.glmedi.2024.100099.
Hoffman BD, Oppert ML, Owen M. Understanding young adults’ attitudes towards using AI chatbots for psychotherapy: The role of self-stigma. Comput Hum Behav: Artificial Humans. 2024;2:100086. https://doi.org/10.1016/j.chbah.2024.100086.
Siddals S, Torous J, Coxon A. “It happened to be the perfect thing”: experiences of generative AI chatbots for mental health. npj Ment Health Res. 2024;3:1–9. https://doi.org/10.1038/s44184-024-00097-4.
Lawrence HR, Schneider RA, Rubin SB, Matarić MJ, McDuff DJ, Bell MJ. The opportunities and risks of large language models in mental health. JMIR Ment Health. 2024;11:e59479.
Guo Z, Lai A, Thygesen JH, Farrington J, Keen T, Li K. Large language models for mental health applications: systematic review. JMIR Ment Health. 2024;11:e57400.
ChatGPT gave instructions for murder, self-mutilation, and devil worship. The Atlantic. Retrieved September 4, 2025, from https://www.theatlantic.com/technology/archive/2025/07/chatgpt-ai-self-mutilation-satanism/683649/.
Lawsuit claims Character.AI is responsible for teen’s suicide. NBC News. October 23, 2024. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791.
Baidal M, Derner E, Oliver N. Guardians of trust: risks and opportunities for LLMs in mental health. In: Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI); 2025. p. 11–22.
Kahane K, Shumate JN, Torous J. Policy in flux: addressing the regulatory challenges of AI integration in US mental health services. Current Treatment Options in Psychiatry. 2025;12:24.
Shen FX, Baum ML, Martinez-Martin N, Miner AS, Abraham M, Brownstein CA, et al. Returning individual research results from digital phenotyping in psychiatry. Am J Bioeth: AJOB. 2024;24:69–90. https://doi.org/10.1080/15265161.2023.2180109.
Ienca M, Haselager P, Emanuel E. Brain leaks and consumer neurotechnology. Nat Biotechnol. 2018;36:805–10. https://doi.org/10.1038/nbt.4240.
Mihan A, Pandey A, Van Spall HGC. Artificial intelligence bias in the prediction and detection of cardiovascular disease. npj Cardiovasc Health. 2024;1:31. https://doi.org/10.1038/s44325-024-00031-9.
Saeed SA, Masters RM. Disparities in health care and the digital divide. Curr Psychiatry Rep. 2021;23:61. https://doi.org/10.1007/s11920-021-01274-4.
Goering S, Klein E, Specker Sullivan L, Wexler A, Agüera Y Arcas B, Bi G, et al. Recommendations for responsible development and application of neurotechnologies. Neuroethics. 2021;14:365–86. https://doi.org/10.1007/s12152-021-09468-6.
Tom E, Aurum A, Vidgen R. An exploration of technical debt. J Syst Softw. 2013;86:1498–516. https://doi.org/10.1016/j.jss.2012.12.052.
Yuste R. Advocating for neurodata privacy and neurotechnology regulation. Nat Protoc. 2023;18:2869–75. https://doi.org/10.1038/s41596-023-00873-0.
Wadden JJ. Defining the undefinable: the black box problem in healthcare artificial intelligence. J Med Ethics. 2022;48:764–8.
Park PS, Goldstein S, O’Gara A, Chen M, Hendrycks D. AI deception: a survey of examples, risks, and potential solutions. Patterns. 2024;5:100988. https://doi.org/10.1016/j.patter.2024.100988.
Wu X, Zhao H, Zhu Y, Shi Y, Yang F, Hu L, et al. Usable XAI: 10 strategies towards exploiting explainability in the LLM era. arXiv preprint arXiv:2403.08946 [Preprint]. 2024. https://arxiv.org/abs/2403.08946.
Barman KG, Wood N, Pawlowski P. Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. Ethics Inf Technol. 2024;26:47.
Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, et al. Model cards for model reporting. In: Proceedings of the Conference on Fairness, Accountability, and Transparency; 2019. p. 220–9. https://doi.org/10.1145/3287560.3287596.
Gallifant J, Afshar M, Ameen S, Aphinyanaphongs Y, Chen S, Cacciamani G, et al. The TRIPOD-LLM reporting guideline for studies using large language models. Nat Med. 2025;31:60–9. https://doi.org/10.1038/s41591-024-03425-5.