China plans AI rules to protect children and tackle suicide risks

Osmond Chia, Business reporter

Image: a young girl looking intently at a smartphone (Getty Images)

China has proposed strict new rules for artificial intelligence (AI) to protect children and to ban chatbots from giving advice that could lead to self-harm or violence.

Under the planned rules, developers will also have to ensure that their AI models do not create content that promotes gambling.

The announcement comes after a surge in the number of chatbots being launched in China and around the world.

Once finalized, the rules will apply to AI products and services in China, a major step in regulating the fast-growing technology, which has come under scrutiny this year over safety concerns.

The draft rules, released over the weekend by the Cyberspace Administration of China (CAC), include measures to protect children, such as requiring AI companies to offer personalized settings, set time limits on use and obtain consent from caregivers before providing emotional communication services.

Chatbot operators should have a human handle any conversation relating to suicide or self-harm and immediately notify the user's guardian or an emergency contact, the administration said.

AI providers must ensure that their services do not create or distribute “content that jeopardizes national security, harms national honor and interests, or undermines national unity,” the statement said.

The CAC said it encourages the adoption of AI, for example to promote local culture and to create communication tools for older people, as long as the technology is safe and reliable. It also called for feedback from the public.

Chinese AI firm DeepSeek made headlines around the world this year after it topped the app download charts.

This month, Z.ai and Minimax, two Chinese startups that together have tens of millions of users, announced plans to go public.

The technology has quickly gained a huge following, with some people using it for communication or therapy.

The impact of AI on human behavior has come under intense scrutiny in recent months.

Sam Altman, chief executive of OpenAI, the maker of ChatGPT, said this year that how chatbots respond to self-harm conversations is one of the company's biggest challenges.

In August, a family in California sued OpenAI over the death of their 16-year-old son, claiming that ChatGPT encouraged him to take his own life. The suit is the first to accuse OpenAI of wrongful death.

This month, the company announced the appointment of a “chief readiness officer” who will be responsible for protecting against the risks posed by AI models to human mental health and cybersecurity.

The successful candidate will be responsible for monitoring AI risks that could harm humans. Mr. Altman said: “It’s going to be intense work and you’ll be jumping into the deep end almost immediately.”

If you are suffering from distress or despair and need support, you can talk to a health professional or an organization that offers support. Details of help available in many countries can be found at Befrienders Worldwide: www.befrienders.org.

In the UK, a list of organizations that can help is available at bbc.co.uk/actionline. Readers in the US and Canada can call the 988 suicide helpline or visit its website.
