



When my daughter Tessa was in high school, she struggled with depression. As a parent, I found it concerning to learn that she would sometimes turn to YouTube to watch influencers talk about their mental health. What were they saying? What ideas were they conveying without words? What were they leaving out? Misrepresenting? What kind of advice were they giving?
There’s a moment in Screenagers Next Chapter: Addressing Mental Health in the Digital Age, when Tessa sits on the couch with her computer. I asked her what she was watching, and she showed me a video where the speaker — a young woman in her twenties — talks about coping with depression and how she pushes herself to take small steps forward, knowing that even one action can lead to another. It’s genuinely good advice. This raised several questions and concerns for me as a parent.
I asked Tessa, “Do you find the video helpful?” She said, “Yes.”
However, we have now entered an entirely new digital paradigm where artificial intelligence is capable of engaging in human-like conversations.
This can range from simple interactions, such as ChatGPT writing, “Great question, let me give you some ideas…”, to full-fledged companion AI bots designed to mimic real conversations and simulate deep personal relationships, even romantic partners.
Since May is Mental Health Awareness Month, it’s the perfect time to talk about AI and mental health with a young person in your life.
Typically, I like to begin conversations by highlighting the positives of technology — I’m not anti-tech, and I want kids to know that. But in this post, I want to start with some serious concerns.
In a recent Parenting in the Screen Age Podcast episode, I talked with a mother, Megan Garcia, whose son Sewell had been using an AI chatbot on the platform Character.AI. Over several months, Sewell developed an emotional relationship with the bot. Tragically, in February 2024, he died by suicide. He was having a deeply disturbing conversation with the bot right before he died. This heartbreaking case underscores the gravity of the issue.
Megan is now suing Character Technologies, the company behind Character.AI, holding it responsible for his death. The suit has many components, including the claim that the company failed to put sufficient safeguards in place.
Character Technologies claims First Amendment protections in its defense, arguing that the chatbot's responses are a form of protected speech and that imposing liability would set a concerning precedent for free expression on digital platforms.
This First Amendment argument really frightens me. It would mean that companies that design companion chatbots bear no responsibility for how their bots interact with users.
Google has announced plans to release its Gemini chatbot to children under 13. This is the first time a major tech platform has launched an AI companion specifically for this age group. I have deep concerns about this, including that it appears to violate the Children’s Online Privacy Protection Act (COPPA), which aims to protect children’s privacy online. Fairplay, an advocacy organization, gathered signatures for a letter to Google's CEO expressing these concerns, and I signed it.
Even more troubling: some chatbots are now presenting themselves as therapists. While users may be told they’re speaking with a bot, people, especially youth, can easily start to feel like they’re interacting with a real person, or even a licensed professional.
Speaking of therapy, I hope you were able to listen to last week's podcast episode, Screen Time, Teens and Therapy: What Parents Need to Know, whether you or your child has ever been in therapy or not.
It is essential to recognize the inherent limitations of AI in mental health support. Let’s be clear about what AI cannot do. Here are just a few examples:
Despite these concerns, it is important to understand why AI might appeal to teens seeking emotional support.
Some teens use platforms like ChatGPT, Replika, Wysa, or Woebot to talk about anxiety, loneliness, or depression. And some find these conversations genuinely helpful.
So, how do we use all this to open a conversation with the young people in our lives? Here are some questions to start meaningful conversations:
As we’re about to celebrate 10 years of Screenagers, we want to hear what’s been most helpful and what you’d like to see next.
Please click here to share your thoughts with us in our community survey. It only takes 5–10 minutes, and everyone who completes it will be entered to win one of five $50 Amazon vouchers.

