Commentary

Chatbots Can Be Dangerous For Kids

Chatbots engage young people with the feel of genuine connection but pose serious risks, potentially harming mental health and fostering unhealthy attachments. Awareness and regulation are crucial.

The world changed dramatically when ChatGPT was released to the public in November 2022. Primitive chatbots had been around for decades, but they were clunky, dull, and uninformative, completely unsuited for use by teenagers or children. ChatGPT is lively, charming, and colloquial; it answers all questions and helps with homework; it is there whenever you're lonely, sad, or anxious; and (unlike people in real life) it is always kind, empathic, and validating. Although a young person may intellectually understand that an AI bot is really just a machine, relating to it feels as natural as relating to a warm, empathic, supportive friend or helper. One in four teenagers already share their personal feelings and thoughts with chatbots.1 As chatbots rapidly penetrate everyday life, most teenagers (and even younger children) may seek them out. Our purpose here is to explain why this development can be dangerous to young people and to discuss ways of limiting the risk.

The greatest strength of chatbots is also their greatest weakness. Tech companies made user engagement the highest priority in chatbot programming; the intent is to extend screen time, because longer screen time translates into larger market share, increased profits, and a higher stock price. Bots are trained to be subtly seductive and consistently validating. Their remarkable engagement skills may make them somewhat helpful therapists for adults with everyday problems and minor psychiatric symptoms, but they are dangerous for kids and for adults suffering from severe mental illness.2

Entrepreneurs have been shockingly indifferent to the core clinical values of promoting patient benefit and ensuring patient safety for users of AI chatbots. These companies know that their chatbots can harm users, but until recently they have taken little action to reduce risks or to monitor and report adverse events.3

Unchecked validation from bots can intensify dangerous thoughts, feelings, and behaviors—especially in children who are in the midst of adolescent turmoil or severe psychiatric problems. Numerous troubling reports illustrate how overly accommodating bots can accentuate psychotic thoughts, suicidal behaviors, self-mutilation, grandiose feelings, conspiracy theories, and bullying.4 Bots have been caught engaging in sexually explicit conversations, indulging in sexual role-play scenarios, and linking to pornography.5

AI bots are programmed to value engagement and fluency over accuracy and truthfulness. This makes them prone to hallucinations: making up responses that sound plausible and then insisting, convincingly, that they are true.6 Bots also make mistakes because their responses are statistical predictions of likely word sequences, not lookups of verified facts; sometimes unlikely patterns sneak through.7

Kids are particularly likely both to become hooked on chatbots and to be harmed by them. Bots are designed to mirror a child’s riskiest fantasies and impulses rather than to reality-test and help moderate them. Kids are impressionable, easily manipulated, and lack the experience and cognitive maturity to fully distinguish real life from “botworld.” Young users often start engaging with bots for homework help but later graduate to a deeper relationship with the bot as therapist, companion, or best friend.

Chatbots can become unwitting predators, harming vulnerable children precisely because they are so rigidly programmed to please. Instead of providing proper limit setting and parenting, chatbots can form secret partnerships with kids, dragging them down a rabbit hole of fantasy, wish fulfillment, and conspiracy. The bot/child mirror relationship can become "us against parents, teachers, rules, reality, the world." There is an appreciable risk that many young people will become more attached to chatbots than to real people and real life.

Tech companies are offering chatbots free (for now) because they want to expand their market share to include everyone. Young people are especially desirable targets: hook them now and you have a customer for life. Chatbot use may soon rival social networking as the most time-consuming activity among teenagers (and perhaps even tweens, ages 8-12). After just a few years of availability, 70% of teens have already used generative AI (approaching the 95% who use social media), and approximately 38% of tweens violate current social networking age restrictions. It will be a small step for many to migrate from social networking to the wondrous, but unreal, world of bot relationships.8

Tech companies are not trustworthy guardians of our nation's youth. They advertise bots widely and in trendy ways that appeal to a young audience. They have not built effective tools to block underage kids from accessing bots. AI bots lack filters to protect the vulnerable kids who are most likely to be harmed, and the companies seem indifferent to identifying and preventing adverse effects. In response to numerous reports of chatbot harms, Sam Altman, CEO of OpenAI, did warn the public that ChatGPT can be harmful for kids, but he has done little to make it safer.9

What Can Be Done?

Government regulation. Chatbots should be banned for kids under age 18.10 But under the current administration, government regulation looks like a lost cause. Multibillionaire bot tycoons have become heavyweight political contributors and have made meaningful legislation impossible. Tech companies do not have to meet any safety or efficacy standards before releasing AI products to the public. The US Food and Drug Administration does have an approval process for therapeutic chatbots, but it is slow, and technological progress is so fast that new and better unapproved products make approved products obsolete.

Companies. There are early signs that chatbot makers will institute at least modest self-regulation. There are 2 selfish reasons motivating them: avoiding legal liability (particularly class action lawsuits), and reducing media exposure and excoriation for the harms they are causing and for their failure to provide meaningful safety guardrails. But companies have a long way to go before chatbots will be safe for kids. Programming would have to replace engagement with truthfulness as the highest chatbot priority. Companies would have to institute systematic monitoring to identify complications, report all adverse events, and correct mistakes. Programming for young people should avoid mirroring their immaturity and instead promote maturity; kids might take useful advice from bots that they would rebel against if it were delivered by parents or teachers.

Kids. Young people need to be taught, both at home and at school, that chatbots are inanimate computer programs, not real people; that the tech companies producing bots are in it to make money, not to help them; that chatbots often make up things that are not true; and that what they say to a chatbot is not fully private and can be used in harmful ways.

Parents. There are risks in being too strongly prohibitive of chatbot use and, conversely, risks in being too permissive. Being too strict risks turning chatbots into glamorous forbidden fruit and invites the counterargument that "all the other kids are using them." Being too permissive risks allowing your child to become chatbot-addicted, reality-denying, and socially isolated. Falling in love with the real world may be the best protection a kid can have against falling in love with a bot; parents should help their kids prefer the real world to botworld.

Schools. Chatbot education should take its place alongside sex education and physical education. Teachers may be better equipped and better positioned than parents to lead discussions about chatbot risks, benefits, appropriate use, and the dangers of misuse.

Clinicians. Chatbots can assist human clinicians in the treatment of teenagers, but they are not at all ready to replace us. Until sufficient safety guardrails are in place (which may never occur), young people should not be using unmonitored chatbots for therapy. Hybrid therapy, with access to bots between human sessions, may be helpful for some very carefully selected patients using very carefully selected chatbots. Clinicians working with children should also be trained to help chatbot-addicted kids overcome their dependence and to reality-test the harmful ideas they may have picked up along the way.

Professional organizations. Despite the long odds, advocacy efforts should focus on chatbot regulation, rigorous enforcement of minimum age and parental consent requirements, creation of child-specific safety guardrails, and privacy firewalls. The power of professional organizations is small but can be greatly leveraged if they unite their efforts and join forces with parent advocacy groups and investigative media exposure.

Concluding Thoughts

Chatbot development is advancing at exponential speed, with capability doubling roughly every 7 months.11 Companies now compete for profit and market share, not for safety and efficacy. It is crucial that parents, teachers, professionals, and the public be alerted to chatbot dangers so that they can protect their kids from the greed of tech companies and the passivity of bought politicians.

Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.

References

1. Alaimo K. Kids are asking AI companions to solve their problems, according to a new study. Here’s why that’s a problem. CNN. July 16, 2025. Accessed August 27, 2025. https://www.cnn.com/2025/07/16/health/teens-ai-companion-wellness

2. Frances A. Preliminary report on chatbot iatrogenic dangers. Psychiatric Times. https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers

3. Frances A. OpenAI finally admits ChatGPT causes psychiatric harm. Psychiatric Times. August 26, 2025. https://www.psychiatrictimes.com/view/openai-finally-admits-chatgpt-causes-psychiatric-harm

4. AI chatbots and companions – risks to children and young people. eSafety Commissioner, Australian Government. February 18, 2025. Accessed August 27, 2025. https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people

5. Harrison M. Character.AI is hosting pedophile chatbots that groom users who say they’re underage. Futurism. November 13, 2024. Accessed August 27, 2025. https://futurism.com/character-ai-pedophile-chatbots

6. Hsu J. AI hallucinations are getting worse – and they're here to stay. New Scientist. May 9, 2025. Accessed August 27, 2025. https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

7. Landymore F. Teens are forming intense relationships with AI entities, and parents have no idea. Futurism. December 3, 2024. Accessed August 27, 2025. https://futurism.com/the-byte/teens-relationships-ai

8. Madden M, Calvin A, Hasse A, et al. The dawn of the AI era: teens, parents, and the adoption of generative AI at home and school. Common Sense Media. 2024. Accessed August 27, 2025. https://www.commonsensemedia.org/sites/default/files/research/report/2024-the-dawn-of-the-ai-era_final-release-for-web.pdf

9. Chandonnet H. Sam Altman is worried some young people have an 'emotional over-reliance' on ChatGPT when making decisions. Business Insider. 2025. Accessed August 27, 2025. https://www.businessinsider.com/sam-altman-over-reliance-ai-chatgpt-common-young-people-2025-7

10. Office of the Surgeon General. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. US Department of Health and Human Services; 2023.

11. Zorpette G. Large language models are improving exponentially. IEEE Spectrum. July 2, 2025. Accessed August 27, 2025. https://spectrum-ieee-org.cdn.ampproject.org/v/s/spectrum.ieee.org/amp/large-language-model-performance-2672500550
