How AI can improve childhood mental health care | Bryn Loftness | TEDxUniversityofMississippi

How can a young child be expected to communicate the intricacies of their feelings? How do parents determine if a child’s behavior should be of greater concern? Bryn Loftness unveils AI-driven solutions designed to help identify and predict mental health concerns in young children. Listen and explore how technology is revolutionizing early detection, providing invaluable insights to parents and providers, and nurturing emotional well-being from the earliest years.

Bryn Loftness is a passionate researcher addressing the existing gaps in current pediatric and family-care mental health resources. Loftness develops machine-learning algorithms integrating wearable sensors, health records, and off-body sensors to unlock insights into human behavior and well-being. She envisions these new technologies revolutionizing the modern mental health care system, starting at the crucial early stages of life when children have the greatest potential for long-term success. Loftness is currently a PhD candidate in the Complex Systems and Data Science program at the University of Vermont and a National Science Foundation Graduate Research Fellow. She also holds a Visiting Researcher position in the Sabeti Lab and the Brown University MAPPS Center through her affiliation with the Broad Institute of MIT and Harvard. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

37 Comments

  1. Tracked mental health will likely ensure that these kiddos remain labeled as mental health patients once they're adults. Health-system stigma will be the result.

  2. A stable family, not exposing them to the digital world from the outset, healthy relationships with their peers, and good parental discipline and diet should be enough. We need to rewind time. No one can ever tell me AI is ultimately a good thing. It's artificial, fake, unoriginal. It negates the use of our own brains and discourages us from pushing them to their limit. A mere extension of memory and consciousness. Absolutely F*CKED!!

  3. I can't help but perceive a determined effort to 'sell' AI to us laypersons in every AI-positive piece I increasingly see.

    In a world which holds the powerful accountable and strives to maintain a degree of ethics over material gain in all walks of life, I have no qualms with using AI to its full potential. We do not live in that world.

    I KNOW that AI will be used to manipulate me, to deprive me, to further exploit me, and to try to make me exploit others. I know it will be used to turn me further into cattle. To allow those already rich and influential to maintain their livelihoods at my expense with greater ease. Until they don't need the lower classes AT ALL to maintain themselves. What happens to me then?

    No, I don't think I'll come around. Sorry you all invested in this being the new tech boom, and all it's done so far is put a middleman between me and Google results. Consider taking the L and moving on.

  4. Kids need LESS time spent on screens so they can properly develop, not more. We've stunted 2 generations by keeping them glued to screens and algorithms instead of learning interpersonal skills and conflict resolution… the result is generations of young people who can't take even very mild criticism. It is destructive.

  5. Repeat after me: The problems with mental health are smartphones, social media, and algorithms.

    Adding more to this is like giving another hard drug to an addict and expecting them to get better. It will not; it will only make things worse.

  6. Suddenly AI is God, the world is saved, we have AI. FK RIGHT OFF. If AI is that good, how did it learn? If 90% of mental health professionals know that diagnosis is not accurate, how is AI any better?

  7. What if this AI was in the classrooms, where 1 teacher has to see, diagnose, and keep track of 25, 30, maybe 40 children simultaneously? Maybe see how one child acts under the leadership of one teacher versus another.

  8. I am not against using AI as a tool within the mental health field. However, using it to detect potential mental health issues in children is a problem in my opinion. First of all, it places the blame within the child and pathologises their experience. Second, humans are highly suggestible creatures: if you tell someone they have something convincingly or often enough, they will start exhibiting that same thing.

    Behaviour is communication, I absolutely agree. So what is it communicating? I don't feel safe, loved, understood; I am in pain; I am lonely… So what's the solution, changing the child? That's a recipe for a very unwell person. Change the environment, change schools, change workplaces, change the public spaces, not the child. Instead, teach them how to regulate their emotions, express themselves in a productive manner, allow them to be kids, make their world safer, and allow them to learn, experience, and grow.

    This attitude towards AI in mental health held by many will create a lot of people who will not be able to function without a mental health service. And the US mental health system, as far as I am aware, is practically non-existent unless you've got $, while the UK's is overwhelmed and drowning and will soon mimic the US in being only for those who can afford it.

    There is a lot to unpack in this issue, and it should not be presented in a naive manner like this.
    (Also feeling a bit annoyed at the tone of this woman – but that's probably me being annoyed :p )
