China's plan to regulate humanlike AI could set the tone for global AI rules
Beijing is set to tighten China's rules on humanlike artificial intelligence, with a strong focus on user safety and public values.

China is pushing ahead with plans to regulate humanlike artificial intelligence, including by requiring AI companies to ensure that users know they are interacting with a bot online.
According to the proposal, published on Saturday by China's cyberspace regulator, people would need to be informed that they are using an AI-powered service, both when they log in and again every two hours. Under the proposal, humanlike AI systems such as chatbots and agents would also be required to uphold "core socialist values" and to have guardrails that protect national security.
In addition, AI companies would have to undergo security reviews and notify local government agencies before deploying any new humanlike AI tools. Chatbots designed to engage users on an emotional level would be prohibited from producing content that encourages suicide or self-harm or that could otherwise harm mental health, as well as content involving gambling, obscenity, or violence.
A mounting body of research shows that AI chatbots are remarkably persuasive, and there are growing concerns about the technology's addictiveness and its ability to lure people into harmful actions.
China's plans may change: the draft proposal is open for public comment until January 25, 2026. But the effort underscores Beijing's ambition to develop the country's AI industry ahead of the U.S., in part by shaping global AI regulation. The proposal also contrasts with Washington's hands-off approach to regulating the technology. In January of this year President Donald Trump rescinded a Biden-era executive order aimed at regulating the AI industry. And earlier this month Trump targeted state-level rules designed to govern AI, threatening legal action against states whose laws the federal government believes are hampering AI development.