
'I thought my daughter and I were close. Then I found out who she was really opening up to.'


"I don't want to burden a human being with my problems."

Those were the words Lisa's* 15-year-old daughter said to her when the teen revealed she had been using ChatGPT as an online therapist.

The revelation came around two months ago and Lisa has been grappling with what to do ever since.

Does she give her daughter the space and time to work out what she's experiencing independently, or does she rush in to intervene?

"It makes me feel quite sad," Lisa told Mamamia, explaining her reaction to hearing her child say she didn't want to add to anyone's stress.

"She's always generally been quite an anxious person, so I think she's just trying to get her head around the hormonal changes and the general teenage stuff really."

Most of her daughter's queries to ChatGPT — as far as Lisa is aware — are around the nuances of navigating social dynamics, as the teen is in a "group of three."

Lisa's daughter is in a group of three. It often makes her feel on the outskirts. Image: Getty.


"That gets into her head of 'am I the third wheel?' Also, you know there's the whole boy thing, so 'why don't boys like me?' That sort of type of concern has been a worry for her lately," Lisa said.

"She's basically been just talking about general stuff; how she's feeling really and stuff going on at school. She's also been talking about things that we have talked about together and advice that I've been trying to give her.

"She's been feeding that into ChatGPT and then it's been saying, 'I think that advice is great.'"


However, in recent weeks the mother has noticed her daughter becoming withdrawn, and Lisa is now questioning the teen's use of the chatbot.

"It is concerning, because obviously I've got no clue as to how good any of this feedback is — if it's going to lead to anything worse," Lisa said.


"I've got no clue about that. I would rather that she's talking to me or somebody else, but yeah, I don't know. It's quite tricky really.

"She knows I'm there, available to talk to her any time, and I do try and chat to her.

"I guess the next step is me talking to somebody and working things out. I'm just trying to navigate this change that's happened. So I guess we'll just see how it goes. She's always been incredibly open and very happy to talk about everything, like literally everything, so it is a little bit of a change."

Lisa is unsure whether to intervene, or whether it's normal teenage angst or something deeper. Image: Getty.


A growing trend.

Lisa's experience is not isolated.

A growing number of teens and young adults, both in Australia and abroad, are turning to chatbots for mental health support, as therapy can be expensive and at times difficult to access.

In April, Californian teen Adam Raine died by suicide, following what his family's lawyer said was "months of encouragement from ChatGPT."

The Raine family is reportedly now suing OpenAI, the maker of ChatGPT. The company, which is on the cusp of a $500 billion valuation, has admitted its systems "fall short."

In a recent blog post, OpenAI explained that while its models are trained not to provide self-harm instructions, it has noticed safety procedures can "degrade" over longer interactions.

"We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade," the statement read.

"For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards."

The company said it will work towards "strengthening safeguards in long conversations," as well as "strengthening protections for teens."


AI 'reinforcing very dangerous behaviours.'

Dr Gavin Brown, clinical director and clinical psychologist at The Banyans Healthcare, said that when people seek out chatbots in a therapy context, it blurs a fundamental line.

"It's blurring the lines between therapist and companion," Brown told Mamamia. "One of the main purposes of therapy is to have our thinking and behaviour challenged. One of the main purposes of AI as it exists today is to be agreeable and to get along well with people.

"I would kind of argue that those purposes are diametrically opposed. If you've got a therapist who only ever agrees with everything that you say, as AI currently does, then that's not actually therapy."

Therapy is designed to, at times, challenge ways of thinking. Image: Getty.


Brown explained AI is designed to keep a person engaged for "as long as possible," which is where reinforcement can come into play.

"There is a real danger that AI can actually be reinforcing very dangerous behaviours," Brown said.

"There are some fairly disturbing examples where AI apparently has reinforced eating disorders. It's reinforced people not taking psychiatric medications and potentially, even in some cases, has actually reinforced suicidal ideation. In those contexts, obviously, it can be incredibly dangerous.

Brown also raised concerns about where chatbots source the information they provide.

"It could well be pulling that information from sources that are not reputable, that are not researched and that are not even factual," he said.

"Again, it's that kind of danger of particularly, young people, seeing AI as an expert. When in fact the AI itself may be relying on insufficient or incorrect information."

Although OpenAI has said it's committed to putting guardrails in place, Brown said greater regulation of the industry is needed as an urgent priority.

"There certainly needs to be a lot of regulation around AI in the therapy space. Companies should not be able to put themselves in that space, or even really in a wellness space, unless there are very firm guard rails around how that AI operates," he said.

"Really thinking about some regulation on the responsibilities of AI companies and how they present AI is a really important step in regulation and unfortunately, in the digital space we tend to just be so far behind.


"We're only just now catching up with social media, which has been around for decades. Now we've got this big new development, which can be useful but also can be very harmful, and we need to not be as slow as we have been with social media to realise what are the ramifications and how do we mitigate the risks as much as we possibly can."

For anyone who is tempted to use ChatGPT in the therapy space, Brown has a message — "check the source."

"Understand a little bit about maybe the particular AI program or whatever that you're using so you know how reputable and ethical it is," he said.

"It's probably most helpful when it's used in conjunction with a therapist, or in between therapist sessions; to help people maybe to understand some of the concepts that they might be learning or how to practice some of those concepts in their day-to-day life."

*Name changed to protect identity.

If you or anyone you know needs to speak with an expert, please contact your GP or in Australia, contact Lifeline (13 11 14), Kids Helpline (1800 55 1800) or Beyond Blue (1300 22 4636), all of which provide trained counsellors you can talk with 24/7.

If you have been bereaved or impacted by suicide loss at any stage in your life, StandBy is a free service you can access on 1300 727 247.

Feature image: Getty.
