The generative AI platform says it’s rolling out a new set of protections to keep under-18s ‘safe’, including parental control features.

Key Takeaways
- OpenAI says it’s developing a ‘long-term system’ to understand whether someone is over or under 18.
- It is building a ChatGPT for users under 18 years old, with ‘age-appropriate policies’.
- Some of those policies include blocking graphic sexual content, refusing flirtatious talk even if asked, and not engaging in discussions around self-harm and suicide.
- While those controls haven’t been built yet, one feature, parental controls, will be available by the end of the month.
Key background
OpenAI has revealed that over the past two weeks it’s been expanding conversations with experts and advocacy groups with a view to improving safety for its users, and is now looking to develop a version of ChatGPT suitable for teens.
It will build an algorithm to determine whether a user is over or under 18 and tailor its ChatGPT system accordingly with ‘age-appropriate policies’. These include blocking graphic sexual content, refusing flirtatious discussions, and declining to discuss suicide and self-harm, even in creative contexts. In some cases, it may even involve law enforcement.
While these protections aren’t available yet – and we don’t know when they will be – OpenAI says it’s launching a parental control system that will be available by the end of the month. This will allow parents to link their account with their teen’s (minimum age 13) and manage which features to disable. They’ll also receive notifications when their teen is in a ‘moment of acute distress’. There’ll even be blackout hours during which a teen cannot use the program.
Crucial quote(s)
“It is extremely important to us, and to society, that the right to privacy in the use of AI is protected… We are advocating for this with policymakers,” OpenAI co-founder and chief executive officer, Sam Altman, said.
“We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection… We realise that these principles are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”
Big number
72% of US teens have tried an AI companion at least once, according to a study by Common Sense Media. In this context, a ‘companion’ is an AI chatbot designed for users to have personal conversations with, not an AI assistant. More than half (52%) of teens (those aged 13-17) say they’re regular users: 13% chat with them daily, and 21% a few times a week.
Tangent
Worldwide spending on AI is forecast to total nearly $1.5 trillion in 2025, according to Gartner, a business and technology insights company. By 2026, that figure is expected to surpass $2 trillion, with the majority of the money spent on integrating generative AI into products like smartphones and PCs.
“The forecast assumes continued investment in AI infrastructure expansion, as major hyperscalers continue to increase investments in data centres with AI-optimised hardware and GPUs to scale their services,” said John-David Lovelock, Distinguished VP Analyst at Gartner.
“The AI investment landscape is also expanding beyond traditional US tech giants, including Chinese companies and new AI cloud providers. Furthermore, venture capital investment in AI providers is providing additional tailwinds for AI spending.”