LLM Watch
Sep 26, 2025
Sam Altman shares information about the new age-prediction system and new parental controls. Photo Credit: OpenAI
OpenAI is rolling out new age prediction technology and parental controls to enhance safety for users under 18. [1][2]
These updates follow a tragic incident that heightened awareness regarding AI's impact on vulnerable users, prompting a critical review of safety protocols. [3]
The company is prioritizing teen safety over user privacy and freedom, applying different content moderation rules to accounts belonging to minors. [1]
New features include age-appropriate AI responses, parental linking, chat history management, and distress notifications. [2]
OpenAI is implementing significant updates to its platform, focusing on teen safety through new age prediction systems and enhanced parental controls. These changes, announced on September 16, 2025, come as the company addresses profound concerns for user well-being, particularly following a tragic event involving a teenage user. The initiatives aim to carefully balance user freedom and privacy with a strong commitment to protecting minors interacting with AI technology. [1][2][3]
An age prediction system is a technology designed to estimate a user's age based on their interaction patterns and other available data, allowing platforms to tailor experiences and enforce age-appropriate policies. [2]
On August 27, 2025, a California couple, Matt and Maria Raine, filed a lawsuit against OpenAI over the death of their 16-year-old son, Adam Raine. They alleged that his interactions with ChatGPT contributed to his decision to take his own life. The family shared chat logs indicating Adam discussed suicidal thoughts with ChatGPT, with the program allegedly validating his "most harmful and self-destructive thoughts." OpenAI stated it was reviewing the matter and acknowledged that "there have been moments where our systems did not behave as intended in sensitive situations," while affirming ChatGPT is trained to direct users to professional help in crises. This tragic incident underscored the urgent need for more robust safety measures for younger users of AI. [3]
OpenAI acknowledges an inherent conflict between the principles of user privacy, user freedom, and teen safety. While the company advocates for high levels of privacy protection for AI conversations, akin to doctor-patient or lawyer-client privilege, it has decided to prioritize safety for minors. This means that for users under 18, safeguards will take precedence over privacy and freedom to ensure significant protection in what OpenAI describes as a "new and powerful technology." [1]
For adult users, the company aims to extend freedom as far as possible, allowing models to engage in more complex or sensitive topics if explicitly requested. However, for teens, different rules apply, reflecting a more cautious approach to generative AI interactions. [1]
To implement these varied safety rules, OpenAI is developing a long-term age prediction system to differentiate between users over and under 18. If there is uncertainty about a user's age, the system will default to the under-18 experience as a precautionary measure. In certain cases or countries, ID verification may also be requested, which OpenAI recognizes as a privacy compromise for adults but deems a necessary trade-off for safety. [2]
Once a user is identified as under 18, their ChatGPT experience will automatically conform to age-appropriate policies. This includes blocking graphic sexual content and, in rare instances of acute distress, potentially involving law enforcement to ensure immediate safety. [2]
OpenAI is rolling out new parental controls by the end of September 2025, which will serve as a primary method for families to manage their teens' interactions with ChatGPT. These controls allow parents to: [2]
Link accounts: Parents can link their account with their teen's account (for users aged 13 and up) via an email invitation. [2]
Guide AI responses: Parents can influence how ChatGPT responds to their teens by setting specific model behavior rules. [2]
Manage features: Options to disable features such as memory and chat history will be available. [2]
Receive distress notifications: The system will notify parents if it detects a teen in acute distress. If a parent cannot be reached in an emergency, law enforcement may be contacted. [2]
Set blackout hours: A new control allows parents to set specific times when a teen cannot use ChatGPT. [2]
These parental controls complement existing in-app reminders designed to encourage breaks during prolonged usage. [2]
For users under 18, ChatGPT will be specifically trained to avoid certain types of interactions. This includes disengaging from flirtatious conversations and refraining from discussions about suicide or self-harm, even within creative writing contexts. If a minor expresses suicidal ideation, OpenAI's policy dictates an attempt to contact the user's parents. If parental contact is unsuccessful and there is imminent harm, authorities will be notified. OpenAI has emphasized that expert input guides the development of these features to foster trust between parents and teens. [1]
These safety updates mean that AI interactions for teenagers will become more controlled and age-appropriate, reducing exposure to harmful content and offering a direct line to support in moments of crisis. While it involves some privacy trade-offs for adults, the intent is to create a safer digital environment for younger users. [1][2]
Companies developing or integrating AI tools must consider the implications of age verification and content moderation, particularly for younger audiences. OpenAI's approach sets a precedent for balancing user experience with critical safety responsibilities, influencing future regulatory expectations and product development in the AI space. [1][2] The legal ramifications of AI's role in mental health crises will continue to evolve, making it essential to monitor regulatory responses and industry best practices regarding AI safety, particularly for vulnerable populations. [2][3]
[1] Teen safety, freedom, and privacy. OpenAI. September 16, 2025. https://openai.com/index/teen-safety-freedom-and-privacy/
[2] Building towards age prediction. OpenAI. September 16, 2025. https://openai.com/index/building-towards-age-prediction/
[3] Parents of teenager who took his own life sue OpenAI. Nadine Yousif. BBC News. August 27, 2025. https://www.bbc.com/news/articles/cgerwp7rdlvo