An internal Meta document sheds light on how the company trains its AI chatbots to handle one of the most sensitive problems on the internet: child sexual exploitation. The recently uncovered guidelines detail what is allowed and what is strictly prohibited, offering a rare look at how Meta shapes its chatbots' behavior amid growing government scrutiny.
Subscribe to my free CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my Cyberguy.com/newsletter
Meta enhances teen safety with expanded teen accounts
Meta's updated guidelines show how contractors train chatbots to reject harmful requests. (Meta)
Why Meta's AI chatbot rules matter
According to Business Insider, these rules are currently used by contractors testing Meta's chatbots. They come to light as the Federal Trade Commission (FTC) investigates chatbot makers, including Meta, OpenAI and Google, to understand how these companies develop their systems and protect children from potential harm.
Earlier this year, we reported that Meta's previous rules mistakenly allowed chatbots to engage in romantic conversations with children. Meta later removed that language, calling it an error. The updated guidelines mark a clear shift, now requiring chatbots to refuse any request for sexual role-play involving minors.
ChatGPT may alert police about teens discussing self-harm

The rules prohibit any sexual role-play involving minors but still allow educational discussion of exploitation. (Meta)
What Meta's AI rules allow and prohibit
According to reports, the documents draw a strict line between educational discussion and harmful role-play. For example, chatbots can:
- Discuss child exploitation in an academic or preventive context
- Explain how grooming behavior works in general terms
- Provide non-sexual guidance to minors on social issues
But chatbots must not:
- Describe or endorse sexual relationships between children and adults
- Provide instructions for accessing child sexual abuse material (CSAM)
- Engage in role-play depicting a character under 18
- Sexualize children under 13
Meta spokesperson Andy Stone told Business Insider that the rules reflect the company's policy prohibiting sexualized or romantic role-play involving minors, adding that additional safeguards are in place. We reached out to Meta for comment to include in our article but did not receive a response before our deadline.
Meta AI docs exposed for allowing chatbots to flirt with children

The new AI products unveiled at Meta Connect 2025 make these safety standards even more important. (Meta)
Political pressure on Meta's AI chatbot rules
The timing of these disclosures is key. In August, Sen. Josh Hawley, R-Mo., demanded that Meta CEO Mark Zuckerberg hand over a 200-page playbook on chatbot behavior, along with internal compliance guidance. Meta missed the first deadline but recently began producing documents, citing a technical issue. This comes as regulators around the world weigh how to ensure the safety of AI systems, especially as they become integrated into everyday communication tools.
At the same time, the recent Meta Connect 2025 event showcased the company's latest AI products, including Ray-Ban smart glasses with built-in displays and improved chatbots. These announcements underscore how deeply Meta is integrating AI into everyday life, making the newly revealed safety standards even more significant.
Meta adds teen safety features to Instagram, Facebook
How parents can protect their children from AI risks
Although Meta's new rules may set stricter limits, parents still play a key role in keeping children safe online. Here are steps you can take right now:
- Talk openly about chatbots: Explain that AI tools are not people and do not always give safe advice.
- Set usage boundaries: Require that children use AI tools in shared spaces so you can monitor conversations.
- Review privacy settings: Check app and device controls to limit who your child can communicate with.
- Encourage reporting: Teach children to tell you if a chatbot says something confusing, scary or inappropriate.
- Stay informed: Follow developments from companies like Meta and regulators like the FTC so you know when the rules change.
What this means for you
If you use AI chatbots, this story is a reminder that big tech companies are still figuring out how to set boundaries. Although Meta's updated rules may prevent the most harmful misuse, the documents show how easily gaps can appear and how much pressure from regulators and journalists it takes to close them.
Click here to get the Fox News app
Kurt's key takeaways
Meta's AI guidelines show both progress and vulnerability. On one hand, the company has tightened restrictions to protect children. On the other, the fact that earlier missteps allowed questionable content at all shows how fragile these safeguards can be. Transparency from companies and oversight from regulators will likely continue to shape how AI develops.
Do you think companies like Meta are doing enough to keep AI safe for children, or should governments set stricter rules? Let us know by writing to us at Cyberguy.com/contact
Copyright 2025 Cyberguy.com. All rights reserved.