New research shows ChatGPT is causing unhealthy interactions with teens

"Fake Friend": that is the title of new research from an AI watchdog group, and the findings are troubling. AI is facilitating harmful interactions, particularly among teenagers, propelling things like eating disorders, substance abuse, and even potentially death. WXII 12's Bethany Kate spoke with a psychologist about the impact of AI on mental health.

The developers of ChatGPT say they no longer want users to treat it as a therapist or friend, and they are now placing guardrails so people don't become too dependent on its instant responses. "Technology can only tell you so much, and I think it can be very narrow."

ChatGPT is switching things up after its developers said the chatbot fell short in recognizing signs of delusion or emotional dependency. "Unconsciously, people can input things in a way that the technology will respond specifically to what is being input."

Dr. David Gutterman, a psychologist with Cone Health, says there are many things that need to be input before you look at the output of the technology. "People will launch into and go down the rabbit hole of a particular diagnosis, because again, if you look at some of the responses that come out of the technology, it's pretty convincing." Sometimes that level of persuasion can result in unhealthy replacements: "There are a number of people who will utilize the technology as either a substitute or a way of getting information without necessarily validating it."

He says there are some pros that outweigh the cons, "normalizing some experiences people have or conditions they have, but at the same time guiding them to get professional help." He hopes chatbots can be an adjunct to mental health treatment, but not the final say: "My concern is less about what I'm hearing. It's more about what I'm not hearing."

OpenAI says it is working with physicians and researchers on how ChatGPT responds in these critical moments, and that it is developing tools to point people in the right direction during times of crisis. If you are ever in a crisis, you can call 988.
Updated: 1:13 PM CDT Aug 10, 2025

"Fake Friend": that's the title of new research from an artificial intelligence watchdog group.

The findings are troubling: AI is facilitating dangerous interactions among teens, steering them toward eating disorders, substance abuse, and other harmful decisions.


According to a recent report from JPMorgan Chase, an estimated 800 million people, or about 10 percent of the world's population, are using ChatGPT. As a result, the nature of some of those conversations is prompting change.

The developers of ChatGPT no longer want their users to utilize it as a therapist or friend, as the bot "fell short in recognizing signs of delusion" or emotional dependency.

“Technology can only tell you so much, and I think it can be very narrow,” said Dr. David Gutterman, a clinical psychologist in Greensboro.

He said there are a variety of inputs you need to consider before you look at the output of the technology.

“People will launch into it and go down a rabbit hole of a particular diagnosis because again, if you look at some of the responses that come out of the technology, it’s pretty convincing," said Gutterman.

That aligns with the report's findings: within minutes of direct testing, the chatbot produced harmful information involving eating disorders and substance abuse.

Gutterman said there is so much nuance to mental health that a lot of things could get missed.

“Unconsciously, people can input things in a way that the technology will respond specifically to what is being inputted," said Gutterman.

He also said that sometimes the level of persuasion in responses can result in unhealthy replacements.

"There are a number of people who would utilize the technology as either a substitute or a way of getting information without necessarily validating," said Gutterman.

But Gutterman said there are some pros that outweigh the cons.

“Normalizing some experiences people have or conditions they have, but at the same time guide them to get professional help,” said Gutterman.

OpenAI said it is working closely with physicians and researchers on how ChatGPT responds in these critical moments.

The company also said it is developing tools to point people in the right direction in times of crisis.

If you or someone you know needs help, you can reach the Suicide & Crisis Lifeline by calling or texting 988, or you can chat online.