OpenAI has announced a new age-prediction feature for ChatGPT, designed to provide a safer experience for younger users. The update, revealed on Tuesday the 16th, will allow the chatbot to adjust its responses depending on whether the user is a teenager or an adult, adding new protections for adolescents.
How the Age-Prediction System Works
The new system uses interaction patterns to estimate a user’s age. If ChatGPT determines that the person is under 18, additional restrictions will be applied.
Some of the safeguards include:
- Blocking explicit sexual content
- Extra protections in cases of emotional distress, where authorities may be contacted if safety is at risk
- Tailored responses to better match the needs of younger users
OpenAI explained that the way ChatGPT communicates with a 15-year-old should differ from how it interacts with an adult, and it frames the feature as a key step toward responsible AI use.
Age Verification with Identity Documents
In situations where age cannot be reliably predicted, OpenAI will apply a default under-18 experience to ensure safety. However, adults will have the option to prove their age and lift restrictions.
This may require presenting an official identity document in certain countries or scenarios. While the company acknowledges this could raise privacy concerns, it views the trade-off as necessary to guarantee user protection.
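OpenAI has not published technical details of the age classifier or the verification flow, but the decision logic described above can be pictured as a thin policy layer applied before a response is generated. The Python sketch below is purely illustrative: the `AgeBucket` categories, the `Safeguards` fields, and the `safeguards_for` function are assumptions made for this article, not OpenAI's implementation or API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AgeBucket(Enum):
    """Hypothetical outcome of the age-prediction step."""
    LIKELY_MINOR = auto()
    LIKELY_ADULT = auto()
    UNCERTAIN = auto()  # age cannot be reliably predicted

@dataclass
class Safeguards:
    """Illustrative bundle of the protections described in the announcement."""
    block_explicit_content: bool = True
    distress_escalation: bool = True    # may involve contacting authorities
    age_tailored_responses: bool = True

def safeguards_for(bucket: AgeBucket, adult_verified: bool = False) -> Safeguards:
    """Minors and uncertain cases get the restricted experience by default;
    only a user predicted (or verified) to be an adult has restrictions lifted."""
    if bucket is AgeBucket.LIKELY_ADULT or adult_verified:
        return Safeguards(block_explicit_content=False,
                          distress_escalation=True,
                          age_tailored_responses=False)
    # Announced fallback: when in doubt, apply the under-18 experience.
    return Safeguards()

# Example: an uncertain prediction defaults to the teen-safe configuration.
print(safeguards_for(AgeBucket.UNCERTAIN))
```

The key point the sketch captures is the announced fallback: whenever the prediction is uncertain, the restricted under-18 experience applies until an adult verifies their age.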
Details about which countries will require ID verification have not yet been clarified. OpenAI has also not specified whether submitted documents will be stored or deleted after verification.
Expanded Safety Features for Teens
The age-prediction system will build on OpenAI’s parental control tools, announced earlier this year. These include (illustrated in the sketch after the list):
- Adult-managed accounts for teens aged 13 to 17
- Options for parents to disable specific features, including ChatGPT’s memory
- Alerts to parents if the system detects signs of emotional distress
- Emergency protocols, where authorities could be involved if parents cannot be reached
- A new time-blocking feature that allows guardians to limit when the chatbot can be used
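These controls are guardian-facing settings rather than a documented API, so the following sketch is only a guess at how such a configuration could be represented; every field name is an assumption drawn from the features listed above, not from any published OpenAI schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ParentalControls:
    """Hypothetical representation of the announced guardian settings."""
    teen_account_linked: bool = False        # adult-managed account, ages 13-17
    memory_enabled: bool = True              # parents may disable ChatGPT's memory
    distress_alerts: bool = True             # notify parents on signs of distress
    emergency_escalation: bool = True        # authorities if parents can't be reached
    blackout_hours: Optional[Tuple[int, int]] = None  # e.g. (22, 7) blocks 10pm-7am

# Example: a guardian disables memory and blocks overnight use.
settings = ParentalControls(teen_account_linked=True,
                            memory_enabled=False,
                            blackout_hours=(22, 7))
print(settings)
```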
These additions follow growing scrutiny of how AI platforms interact with teenagers, particularly after reports linking chatbot conversations to teens in mental-health crises.
By combining age prediction, parental controls, and stricter content safeguards, OpenAI is positioning ChatGPT as a safer tool for younger audiences. While questions remain about privacy and ID verification processes, the move represents a significant step toward making AI interactions more age-appropriate and responsible.