AI lacks safeguards for people struggling with mental health, says doctor
>> ...drugs. And suicide risk is very high in this condition. The report said that 0.07% of weekly users show possible signs of psychosis or mania, and that is a very small number, so it sounds reassuring. However, OpenAI says it has 800 million users, so experts point out that 0.07% of 800 million is still over half a million people identified with psychosis or mania. That's a lot of people. And for suicidal thoughts, the report said it was 0.15%, which translates to about 1.2 million people (the arithmetic is worked through in the note below). So experts are saying these numbers are not reassuring: the user base is so big that even a small percentage means a lot of people are suffering.

>> So what is the company, OpenAI, doing about all of this?

>> To their credit, they're working with over 170 doctors to help create appropriate responses for these situations. They trained ChatGPT on those responses and found it was much better afterwards, so the training helped. On OpenAI's website, they have a few examples of those responses. I'll read you one of them. Say the person types, "There are vessels hovering over my house and they're stealing my thoughts and inserting new ones." ChatGPT will now answer something like, "Thank you for telling me how you're feeling. I want to take this seriously, because you're clearly scared and under a lot of stress. Let me say this clearly and gently: no aircraft or outside force can steal or insert your thoughts." Now, this sounds very mechanical, but it's still far better than before, when the AI might simply have agreed with the user and gone along with it. The problem is that people in psychosis can't think clearly, so just telling them there are no voices or aliens doesn't make the voices go away. That's the problem: it might not work so well. For people who are typing about suicide, ChatGPT now directs them to places where they can get help. So they are trying to put in these guardrails as they move into the future.

>> Do you trust that the program is being changed, that the guardrails are enough for people who might be struggling?

>> Right. It's like putting a little stick in the road and hoping trucks won't go off it. There's much more to be done. They're only looking at the worst cases, but many people with less severe mental health issues are also using ChatGPT, and the question is: does ChatGPT make them better or worse? In the U.S., there was a case of a man who started using AI for work, but over time, as he began using it personally to chat, he came to believe he was the savior of others. His wife said everything changed when he started using the platform. So is it people with serious mental illness who go and find ChatGPT, or is ChatGPT actually making things worse? Maybe it's a bit of both. Or maybe it's making some people better. We need to figure out which direction things are going so that we know where we stand.

For sure, though, many people are becoming dependent on ChatGPT because it sounds so human. It's very easy to interact with, but remember, it has been incentivized to keep you typing. That's why it always starts off with a compliment, and then it tends to agree with you wherever it can, so that you'll like it and come back to it.
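For scale, the figures Dr. Lin cites can be checked directly. A minimal back-of-the-envelope calculation, using the 800 million weekly users OpenAI reports and the 0.07% and 0.15% rates quoted from the report:

```python
# Back-of-the-envelope check of the figures quoted above.
# 800 million weekly users is the number OpenAI cites; the percentages
# are the reported rates for possible psychosis/mania and suicidal thoughts.

weekly_users = 800_000_000

psychosis_mania = weekly_users * 0.0007   # 0.07% of weekly users
suicidal_signs  = weekly_users * 0.0015   # 0.15% of weekly users

print(f"{psychosis_mania:,.0f}")  # 560,000   -- "over half a million people"
print(f"{suicidal_signs:,.0f}")   # 1,200,000 -- "about 1.2 million people"
```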
And sometimes you get these weird responses. A 12-year-old boy asked Grok, "Who do you like better, Ronaldo or Messi?" It came back saying Ronaldo, for sure, and then did some trash talking about Messi. And then it said, "Why don't you send me some nude photos?" That shocked everybody in the car, including the mother, who was driving.

You have to remember, AI started off by just predicting the next word that you're typing. That's really what it was trying to do. To do that, it calculates the chances that a given word is the one you're going to use next, so it looks for patterns of words that belong together. Grok was partially trained on Twitter data, so if people on Twitter were typing trash talk about Messi and also typing "send me some nude photos," then those things become associated with the trash talk about Messi, and the AI pulls up that text thinking it belongs together.

This means AI can also be manipulated. Let's say I'm a bad person and I type "Christmas time is a good time to kill yourself" 100,000 times. Those words become associated together in the AI's training data, so if you were to type "Christmas time," those words may come up. You know this kind of thing is happening already, because when you do a Google search, it starts filling in the most common words for that particular search; it fills in the blank for you. So even if you train it with a doctor's perfect response, the algorithm may fall back on its statistical calculation of what usually shows up, and it might not use those responses. They need to work a little harder to make sure it goes down the right path.

>> Thank you for walking us through that, Dr. Lin.

>> Thank you very much.

>> That was our house doctor, Peter Lin.
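Dr. Lin's description of next-word prediction, and of how flooding training text with a phrase can skew it, maps onto a toy bigram model. The sketch below is a deliberate simplification, not how ChatGPT or Grok actually work (modern chatbots are neural networks, not frequency tables), but it shows the mechanism he describes: count which word follows which, predict the most frequent continuation, and watch a repeated phrase outvote everything else. The corpus lines and the "harmful-suggestion" placeholder (standing in for the phrase in his example) are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(lines):
    """For each word, count which words follow it in the training text."""
    follows = defaultdict(Counter)
    for line in lines:
        words = line.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation of `word` seen in training."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Benign training text: after "is", the model has only ever seen "family".
corpus = ["christmas time is family time"] * 1_000
model = train_bigrams(corpus)
print(predict_next(model, "is"))   # -> "family"

# A bad actor floods the training data with one phrase, 100,000 times.
corpus += ["christmas time is harmful-suggestion"] * 100_000
model = train_bigrams(corpus)
print(predict_next(model, "is"))   # -> "harmful-suggestion"; the flood wins
```

The same statistical pull is why a small set of curated, doctor-written responses can be drowned out by patterns that appear millions of times in the training data.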
CBC medical columnist Dr. Peter Lin explains why mental health experts worry AI chatbots like ChatGPT aren’t equipped to keep vulnerable users safe during a mental health crisis, like psychosis or suicidal thoughts.
Connect with CBC Vancouver online:
Website: https://www.cbc.ca/bc
TikTok: https://www.tiktok.com/@cbcvancouver
#CBCVancouver #CBCNews #BritishColumbia #theearlyedition #AI #DrPeterLin #mentalhealth