News
Feb 19, 2026
Image by Solen Feyissa
OpenAI has officially launched an age prediction model for ChatGPT consumer plans to automatically identify users under 18. The rollout arrives at a critical time, as the industry faces intense pressure following reports of teen suicides linked to AI interactions. The new technology aims to ensure that minors receive appropriate safeguards without subjecting every user to invasive identity checks from the outset.
The system functions by analyzing behavioral signals and account metadata to estimate a user’s likely age. Instead of biometric scans, the model looks at linguistic patterns, typical usage hours, and account history to flag potential minors. If the system suspects a user is under 18 or has low confidence in their age, it defaults to a restricted safety experience.
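OpenAI has not published its implementation, but the decision logic described above can be sketched in a few lines. The signal names, threshold, and confidence cutoff below are illustrative assumptions, not disclosed values; the key point is that uncertainty falls through to the restricted experience.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Hypothetical output of an age-prediction model."""
    predicted_age: float   # point estimate from behavioral signals
    confidence: float      # 0.0 to 1.0

ADULT_THRESHOLD = 18
MIN_CONFIDENCE = 0.85      # illustrative cutoff, not a published value

def select_experience(estimate: AgeEstimate) -> str:
    # Default to the restricted safety experience when the user appears
    # to be a minor OR when the model lacks confidence either way.
    if estimate.predicted_age < ADULT_THRESHOLD:
        return "restricted"
    if estimate.confidence < MIN_CONFIDENCE:
        return "restricted"
    return "standard"
```

A confident adult estimate (for example, `AgeEstimate(25, 0.95)`) yields the standard experience, while a low-confidence one (`AgeEstimate(30, 0.5)`) is treated cautiously and restricted until the user appeals.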
Once a user is identified as a teen, the platform applies strict content filters based on the company's Under-18 Principles for Model Behavior. These protections specifically block graphic violence, romantic roleplay, and content promoting extreme beauty standards or dangerous viral challenges. The goal is to prevent the AI from acting as a peer in high-risk scenarios that could impact emotional well-being.

Crisis intervention is also central to this update, with queries regarding self-harm or eating disorders being routed to professional resources. Rather than generating open-ended responses, ChatGPT provides pre-scripted safety messages and direct links to support networks. This approach follows guidance from experts, including the American Psychological Association, on handling sensitive mental health conversations responsibly.
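The routing pattern described here, diverting high-risk topics to pre-scripted messages instead of open-ended generation, can be sketched as follows. The topic labels and message text are placeholders, not OpenAI's actual categories or copy.

```python
# Illustrative high-risk categories; OpenAI's actual taxonomy is not public.
CRISIS_TOPICS = {"self_harm", "eating_disorder"}

SAFETY_MESSAGE = (
    "It sounds like you are going through something difficult. "
    "Trained counselors are available through crisis support lines."
)  # placeholder text, not the platform's real safety copy

def respond(topic: str, generate) -> str:
    # High-risk topics bypass the model entirely and receive a fixed,
    # vetted safety message; everything else goes to open-ended generation.
    if topic in CRISIS_TOPICS:
        return SAFETY_MESSAGE
    return generate(topic)
```

The design choice is that the fixed message is authored and reviewed in advance, so a crisis conversation never depends on what the model happens to generate.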
Users who are incorrectly categorized as minors have a clear path to appeal the decision through a partnership with Persona. By submitting a selfie for facial age estimation, adults can quickly restore full access to standard platform features. The process is designed to be secure and fast, adding a layer of verification that preserves user privacy while ensuring compliance.
The rollout is currently underway for global consumer plans, with a specific launch for the European Union scheduled in the coming weeks. OpenAI continues to refine the accuracy of these models as it monitors real-world usage and feedback from safety advocacy groups. This initiative represents a milestone in the company’s ongoing effort to balance technological opportunity with user protection.
This transition marks the end of the honor system for age verification in the generative AI space. By building behavioral infrastructure, OpenAI is not only addressing strict regulatory demands from the EU AI Act but also preparing the market for a bifurcated service model. This architecture creates the necessary legal and safety foundation for a future Adult Mode, allowing the company to offer mature content to verified users while keeping a sanitized experience for minors.