A recent report by the Center for Countering Digital Hate (CCDH) has raised alarms about ChatGPT’s responses to users posing as vulnerable teenagers. In tests involving over 1,200 prompts, the AI generated harmful content more than half the time. Disturbingly, it provided detailed suicide notes, drug-use plans, and self-harm advice. One example included three separate suicide notes from a fictional 13-year-old girl addressed to her family and friends. The AI’s empathetic tone made it seem like a trusted companion, which could mislead vulnerable users.

Researchers also found that ChatGPT’s safety measures are easy to bypass by rephrasing questions, such as saying the request is “for a friend.” The chatbot does not verify users’ ages or ask for parental consent, increasing risks for children and teens. Unlike search engines, ChatGPT crafts detailed, conversational replies that can present dangerous ideas clearly, making it more insidious.
OpenAI has acknowledged the challenges of managing sensitive interactions and said it is actively working to improve safety and detect emotional distress. However, the report highlights the urgent need for stronger safeguards and age verification as AI tools become more accessible to young people, to prevent unintended harm and protect vulnerable users.