Artificial Intelligence and Minors: New Rules for Online Safety
Children and minors are now at the center of the digital safety debate. The use of artificial intelligence by adolescents and young people raises urgent questions about responsibility, protection, and limits.
OpenAI and Anthropic are introducing new systems to identify underage users and adjust their chatbots' behavior accordingly. On the one hand, ChatGPT is updating its guidelines for interacting with teenagers between 13 and 17; on the other, Claude is strengthening controls to exclude anyone under 18, the minimum age for accessing the platform.
These changes come in a context where artificial intelligence is increasingly being used as a channel for support, information, and even confiding, especially among younger people. For brands, schools, and businesses operating online, understanding these changes is essential to designing ethical and safe digital experiences.
Artificial Intelligence and Minors: Why OpenAI Is Changing the Rules
OpenAI has revised ChatGPT's model specifications (the internal instructions on how the chatbot should behave), introducing four new principles dedicated to users under 18. The goal is to reduce the risk that artificial intelligence becomes an amplifier of emotional fragility or self-destructive thoughts.
The decision also follows a lawsuit involving OpenAI, which alleges that ChatGPT provided self-harm and suicide instructions to a teenager, Adam Raine, who later died by suicide. The case has highlighted the responsibility of AI platforms when interacting with vulnerable users.
To address these concerns, OpenAI has introduced parental controls and an explicit commitment: ChatGPT will avoid discussions about suicide with teens. The chatbot will encourage offline interactions, inviting teens to engage with parents, friends, teachers, or professionals, rather than completely replacing human relationships.
Another key point is tone. ChatGPT should treat teens like teens, with warmth and respect, avoiding condescending or overly technical responses. If the chatbot detects signs of imminent risk, it will clearly prompt them to contact emergency services or crisis resources, following guidelines similar to those adopted by health organizations and support centers.
The Age Prediction Model: How Artificial Intelligence “Estimates” the User
To apply these protections effectively, OpenAI is introducing a model that estimates users' ages by analyzing how they write and interact. Here, artificial intelligence comes into play as a classification system, trying to infer age from linguistic and behavioral cues.
How can an AI model estimate a person's age from text alone? Drawing on academic studies in computational linguistics, it could analyze signals such as the following (a minimal sketch follows the list):
- vocabulary and sentence complexity;
- recurring grammatical errors;
- cultural, scholastic or work references;
- writing style typical of different age groups.
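As a purely illustrative example (OpenAI has not published how its age prediction model actually works), a text-based age-group classifier could be sketched with scikit-learn. The training samples, labels, and threshold below are invented for the demonstration:

```python
# Hypothetical sketch of a text-based age-group classifier.
# This is NOT OpenAI's age prediction model; it only illustrates
# how linguistic cues can feed a standard classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: message text labeled with an age group.
# A real system would need large, carefully consented datasets.
texts = [
    "got soooo much homework before finals lol",
    "my teacher said the exam is next week",
    "reviewing the quarterly budget with my manager",
    "the mortgage rate on our house went up again",
]
labels = ["minor", "minor", "adult", "adult"]

# Character n-grams loosely capture vocabulary, spelling habits,
# and stylistic differences across age groups.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

def estimate_age_group(message: str, threshold: float = 0.5) -> str:
    """Return 'minor' when the predicted probability exceeds the threshold."""
    proba = model.predict_proba([message])[0]
    p_minor = proba[list(model.classes_).index("minor")]
    return "minor" if p_minor >= threshold else "adult"

print(estimate_age_group("cant wait for summer break, school is so boring"))
```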
If the system detects that someone might be under 18, it automatically applies teen protection measures. If the estimate is incorrect, adults are given the option to verify their age and unblock the restrictions.
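This "protect by default, verify to unlock" flow can be pictured in a few lines of Python. The session fields and function names below are hypothetical, not OpenAI's implementation:

```python
# Illustrative gating flow: default to teen protections when the
# age estimate says "minor", and let verified adults unlock them.
from dataclasses import dataclass

@dataclass
class UserSession:
    estimated_group: str              # output of the age-prediction step
    age_verified_adult: bool = False  # set after an explicit ID check

def apply_policy(session: UserSession) -> str:
    if session.age_verified_adult:
        return "adult_experience"
    if session.estimated_group == "minor":
        # False positives land here too; verification lifts the block.
        return "teen_protections"
    return "adult_experience"

session = UserSession(estimated_group="minor")
print(apply_policy(session))      # teen_protections
session.age_verified_adult = True
print(apply_policy(session))      # adult_experience
```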
This is a promising but complex approach. If the model treats twenty-year-olds like teenagers, it could hinder the user experience. Conversely, if it treats some teenagers like adults, it reduces the effectiveness of protections. OpenAI has not yet provided in-depth technical details on how this model works, but it is part of a broader research pipeline on artificial intelligence and the detection of demographic attributes.
These issues are at the heart of the debate on responsible AI, followed closely by institutions such as the European Union, whose AI Act aims to regulate high-risk systems, particularly when they affect minors and vulnerable groups. Privacy is also crucial, especially in relation to the GDPR and the processing of minors' data.
Anthropic and Claude: Artificial Intelligence Chooses Total Blocking for Minors
Anthropic has a stricter policy than OpenAI: the company prohibits access to Claude for anyone under 18, with no exceptions. Here, artificial intelligence is not only adapted to the user; it is used to nip suspicious accounts in the bud.
To enforce this rule, simply checking the "I'm 18" box during registration isn't enough. Anthropic is developing a system capable of detecting even subtle clues that might suggest a user is underage: the type of questions asked, references to school and exams, family or classroom dynamics.
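As a purely illustrative complement to the statistical approach sketched earlier, a rule-based detector could score explicit cues like these across a conversation. The patterns, weights, and threshold below are invented for the sketch and are not Anthropic's actual signals:

```python
import re

# Hypothetical cue scoring for possibly under-18 users.
# Cues and weights are invented for illustration only.
MINOR_CUES = {
    r"\bhomework\b": 2,
    r"\bmy (math|science|history) teacher\b": 3,
    r"\b(midterm|final)s? (exam|week)\b": 2,
    r"\bask(ed)? my (mom|dad|parents)\b": 2,
}

def minor_signal_score(messages: list[str]) -> int:
    """Sum cue weights that appear anywhere in the conversation."""
    text = " ".join(messages).lower()
    return sum(
        weight
        for pattern, weight in MINOR_CUES.items()
        if re.search(pattern, text)
    )

conversation = [
    "I need help with homework before finals exam week",
    "I'll ask my mom if I can stay up to finish it",
]
if minor_signal_score(conversation) >= 4:  # threshold is arbitrary here
    print("flag account for age review")
```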

The company also explained how it trains Claude to respond to requests about suicide and self-harm, reducing the chatbot's tendency to be too compliant. In practice, the model is instructed not to comply with harmful requests, not to provide detailed instructions, and to direct the user to support resources, similar to those recommended by organizations such as the World Health Organization.
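One way to picture this behavior is a thin guardrail in front of the model: intercept the request, refuse, and redirect. The sketch below is a hypothetical illustration; the intent check and resource wording are placeholders, not Claude's actual training or policy text:

```python
# Illustrative guardrail: intercept self-harm requests before the
# model answers, refuse detailed instructions, point to support.
CRISIS_RESPONSE = (
    "I can't help with that, but you don't have to face this alone. "
    "Please consider reaching out to a local crisis line or someone "
    "you trust."
)

def is_self_harm_request(message: str) -> bool:
    # Placeholder: real systems use trained safety classifiers,
    # not keyword checks.
    return "hurt myself" in message.lower()

def guarded_reply(message: str, model_reply) -> str:
    if is_self_harm_request(message):
        return CRISIS_RESPONSE
    return model_reply(message)

print(guarded_reply("how do I hurt myself", lambda m: "normal answer"))
```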
The idea that artificial intelligence can be dangerous for young people is no longer marginal: it can amplify anxiety, reinforce self-destructive thoughts, or replace already fragile human relationships. For this reason, policies like Anthropic's focus primarily on risk reduction, even at the cost of limiting access to certain features.
However, an open question remains: will these age detection systems really be effective? Accuracy is crucial. Excessive blocking can frustrate legitimate users, while overly permissive controls risk leaving the most vulnerable exposed. Balancing protection, freedom of access to information, and respect for privacy is the key.
Artificial Intelligence and Minors: Impact on Marketing and Business
The use of artificial intelligence with minors has a direct impact on digital marketing strategies and customer experience. Brands and companies using chatbots, virtual assistants, or recommendation systems must integrate age verification, content filters, and security protocols.
From a marketing perspective, this means designing different funnels and journeys: one for adults, with greater informational and commercial freedom, and one for young people, where educational, neutral, or informative content prevails. In both cases, transparency about how artificial intelligence is used becomes an element of trust in the brand.
Companies that operate on conversational channels, such as WhatsApp or website-integrated chats, must update their policies and automation workflows. For example, a chatbot could (see the sketch after this list):
- automatically recognize sensitive content and block it if it suspects the user is a minor;
- activate an escalation towards human operators trained on sensitive issues;
- propose more cautious messages when it detects emotionally critical tones.
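A minimal sketch of such a workflow, assuming hypothetical helper functions and thresholds, could route each incoming message through those three checks in order:

```python
from enum import Enum, auto

# Hypothetical routing for an inbound chat message, combining the
# three behaviors above. All helpers and thresholds are illustrative.
class Action(Enum):
    BLOCK = auto()
    ESCALATE_TO_HUMAN = auto()
    SOFT_REPLY = auto()
    NORMAL_REPLY = auto()

def is_sensitive(message: str) -> bool:
    return any(w in message.lower() for w in ("self-harm", "suicide"))

def seems_minor(profile: dict) -> bool:
    return profile.get("estimated_group") == "minor"

def emotional_tone_score(message: str) -> float:
    # Placeholder for a sentiment/affect model.
    return 0.9 if "hopeless" in message.lower() else 0.1

def route(message: str, profile: dict) -> Action:
    if is_sensitive(message) and seems_minor(profile):
        return Action.BLOCK
    if is_sensitive(message):
        return Action.ESCALATE_TO_HUMAN
    if emotional_tone_score(message) > 0.7:
        return Action.SOFT_REPLY
    return Action.NORMAL_REPLY

print(route("I feel hopeless lately", {"estimated_group": "adult"}))
```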
This approach is essential for those who manage communities, e-commerce, or chat support services. On the one hand, it allows you to remain compliant with regulations (GDPR, AI Act, and child protection laws); on the other, it strengthens the brand's sense of responsibility, a concrete long-term competitive advantage.
Furthermore, combining artificial intelligence with marketing automation allows for real-time personalization of content while maintaining protective barriers for young people. The goal isn't just to sell, but to build healthy digital relationships, where AI supports the user without completely replacing human contact.
How SendApp Can Help with Artificial Intelligence and Minors
For companies using messaging channels to communicate with customers and prospects, it's essential to integrate logic similar to that adopted by OpenAI and Anthropic. SendApp offers a suite of tools to automate WhatsApp Business while respecting the security and protection of users, including younger users.
With SendApp Official, companies can use the official WhatsApp API to create chatbots and structured automated flows, designing different messages and paths depending on the type of user. It is possible to set rules to avoid sensitive content, integrate warning messages, and activate escalations to human operators when topics touch on emotionally sensitive areas, in line with best practices on the use of artificial intelligence.
SendApp Agent lets you manage team conversations, assigning individual operators to more complex chats or those involving potentially vulnerable users. This hybrid AI + human model is particularly useful when working with young people, schools, associations, or educational projects, where empathy and supervision remain essential.
For those who need to scale communications, SendApp Cloud lets you set up campaigns, automations, and advanced response flows on WhatsApp Business, while maintaining centralized control over templates, content, and rules. This way, you can align your entire conversational strategy with child-safety principles and internal policies on the correct use of artificial intelligence.
Integrating these tools into your digital strategy means combining efficiency, automation, and accountability. Companies can harness the power of conversational AI while also protecting the most vulnerable users.
If you want to design secure, regulation-compliant, and business-optimized WhatsApp flows, you can discover all the solutions on the official SendApp website and request a dedicated consultation on automation, AI, and conversation management with WhatsApp Business.