Why humanoid robots still can't survive in the real world
General-purpose robots remain rare, not because of a lack of hardware, but because we still cannot give machines the physical intuition that humans gain through experience.

NEURA Robotics' 4NE-1 Gen 3 humanoid robot on display at IFA 2025 in Berlin, Germany, on September 6, 2025. Arthur Vidak/NurPhoto via Getty Images
In Westworld, humanoid robots pour drinks and ride horses. In Star Wars, droids are as ordinary as household appliances. This is the future I keep expecting when I watch the Internet's new favorite genre: videos of robots dancing, kickboxing, or doing parkour. But then I look up from my phone, and there are no androids on the pavement.
By robots, I don't mean the millions already installed in factories, or the tens of millions that consumers buy each year to vacuum carpets and mow lawns. I'm talking about robots like C-3PO, Data, and Dolores Abernathy: general-purpose humanoids.
What keeps them off the streets is a problem that robotics researchers have debated for decades: building robots is easier than making them work in the real world. A robot can replicate TikTok moves on a flat surface, but the world is filled with uneven sidewalks, slippery stairs, and people in a hurry. To understand the difficulty, imagine carrying a bowl of soup through a messy bedroom in the dark; every movement requires constant reevaluation and recalibration.
Artificial intelligence language models, such as those behind ChatGPT, do not offer a simple solution. They have no embodied knowledge. They are like people who have read every book on sailing without ever leaving land: they can describe the wind and the waves and quote famous sailors, but they have no physical sense of how to steer a boat or trim a sail.
“Some people think we can get data from videos of people, like YouTube, but looking at pictures of people doing things doesn't give you information about the actual detailed movements people are making, and going from 2D to 3D tends to be very difficult,” roboticist Ken Goldberg said in an August interview with the University of California, Berkeley, news site.
To explain this gap, Meta's chief artificial intelligence scientist Yann LeCun noted that by age four, a child has taken in far more visual information through their eyes alone than the amount of data on which the largest large language models (LLMs) are trained. “In 4 years, a child has seen 50 times more data than the largest LLMs,” he wrote on LinkedIn and X last year. Children learn from an ocean of embodied experience; the massive data sets used to train AI systems look like a puddle in comparison. And it's the wrong kind of data anyway: feeding an AI millions of poems and blog posts won't make it any better at making your bed.
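A rough back-of-envelope calculation shows why a comparison of this kind is plausible. The specific figures below (optic-nerve bandwidth, waking hours, corpus size) are illustrative assumptions, not LeCun's exact numbers, but they land in the same ballpark as his claim.

```python
# Illustrative back-of-envelope estimate; all figures are assumptions, not LeCun's exact numbers.

OPTIC_NERVE_BYTES_PER_SEC = 2e7   # assume ~20 MB/s for both eyes combined
WAKING_HOURS_PER_DAY = 12         # assume a young child is awake about half the day
YEARS = 4

child_bytes = OPTIC_NERVE_BYTES_PER_SEC * WAKING_HOURS_PER_DAY * 3600 * 365 * YEARS

LLM_TRAINING_TOKENS = 1e13        # assumed order of magnitude for a large LLM's training corpus
BYTES_PER_TOKEN = 2               # assumed average bytes per token

llm_bytes = LLM_TRAINING_TOKENS * BYTES_PER_TOKEN

print(f"Child's visual input over {YEARS} years: ~{child_bytes:.1e} bytes")
print(f"Large LLM training corpus:               ~{llm_bytes:.1e} bytes")
print(f"Ratio: roughly {child_bytes / llm_bytes:.0f}x")
```

With these assumptions the child comes out dozens of times ahead, the same order of magnitude as the 50-fold figure.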
Roboticists have focused primarily on two approaches to closing this gap. The first is demonstration. People control robot arms remotely, often through virtual reality, so the systems can record what “good behavior” looks like. This has allowed a number of companies to begin building data sets for training future AIs.
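To make the idea concrete, here is a minimal sketch of what such demonstration logging could look like: the teleoperator's commands and the robot's camera frames are recorded together as (observation, action) pairs that an imitation-learning model can later be trained on. The functions read_camera and read_operator_command are hypothetical stand-ins, not any particular company's API.

```python
import pickle
import time

def read_camera():
    # Hypothetical stand-in for a real camera driver; returns a dummy 64x64 frame here.
    return [[0] * 64 for _ in range(64)]

def read_operator_command():
    # Hypothetical stand-in for a VR teleoperation rig; returns dummy joint targets here.
    return {"joint_targets": [0.0] * 7, "gripper": 0.0}

def record_demonstration(seconds=30, hz=20, path="demo.pkl"):
    """Log synchronized (observation, action) pairs while a human teleoperates the robot."""
    episode = []
    for _ in range(int(seconds * hz)):
        observation = read_camera()           # what the robot sees at this instant
        action = read_operator_command()      # what the human commanded at the same instant
        episode.append({"time": time.time(), "observation": observation, "action": action})
        time.sleep(1.0 / hz)
    with open(path, "wb") as f:
        pickle.dump(episode, f)               # one teleoperation session becomes one training trajectory
    return episode

if __name__ == "__main__":
    record_demonstration(seconds=1)           # record a short example episode
```

The expensive part is not the logging itself but the human time: every trajectory in such a data set still has to be performed, once, by a person.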
The second approach is simulation. In virtual environments, AI systems can practice tasks thousands of times faster than would be possible in the physical world. But simulation runs into a reality gap: a task that is simple in a simulator may fail in reality, because the real world contains countless small details, such as friction, soft materials, and lighting.
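One standard way researchers try to narrow this gap, not spelled out in this article but widely used in the field, is domain randomization: friction, mass, lighting, and other physical parameters are varied randomly across simulated episodes so that a policy cannot overfit to one idealized world. Below is a minimal sketch under that assumption, with a made-up simulate_episode function standing in for a real physics engine.

```python
import random

def simulate_episode(friction, object_mass, light_level):
    # Stand-in for a real physics simulator: a real one would roll out the robot's
    # policy under these conditions; here we just return a made-up "success score."
    difficulty = abs(friction - 0.5) + abs(object_mass - 1.0) + abs(light_level - 0.7)
    return max(0.0, 1.0 - difficulty)

def train_with_domain_randomization(num_episodes=1000, seed=0):
    """Vary physical parameters every episode so a policy cannot overfit to one idealized world."""
    rng = random.Random(seed)
    scores = []
    for _ in range(num_episodes):
        friction = rng.uniform(0.2, 1.0)      # sliding friction coefficient
        object_mass = rng.uniform(0.1, 3.0)   # kilograms
        light_level = rng.uniform(0.3, 1.0)   # normalized illumination
        scores.append(simulate_episode(friction, object_mass, light_level))
    return sum(scores) / len(scores)

print(f"Average success across randomized conditions: {train_with_domain_randomization():.2f}")
```

The hope is that a policy that works across thousands of slightly different imaginary worlds will also tolerate the one real world it eventually meets.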
This reality gap explains why the robotic parkour star can't wash your dishes. After the first World Humanoid Robot Games, held this year in Beijing, where robots competed at soccer and boxing, roboticist Benji Holson wrote about his disappointment. What people really want, he argued, is a robot that can do housework. He proposed a different kind of humanoid Olympics, in which robots would have to solve problems such as folding a T-shirt that is inside out, using a dog-poop bag, and scraping peanut butter off their own hands.
It is easy to underestimate the complexity of these tasks. Imagine something as ordinary as rummaging through a gym bag full of clothes to find one particular shirt. Every part of your hand and wrist registers texture, shape, and resistance. You can identify objects by touch and proprioception without having to pull everything out and inspect it.
A useful parallel is a type of robot that we've been teaching for many years, usually without calling it a robot: the self-driving car. For example, Tesla is collecting data from its cars to train the next generation of its self-driving AI. Across the industry, companies have had to collect massive amounts of driving data to achieve today's level of automation. But humanoids have a more difficult job than cars. Homes, open spaces and construction sites are much more varied than highways.
That's why engineers design many of today's robots to work in well-defined spaces such as factories, warehouses, hospitals, and sidewalks, and task them with doing one job very well. Digit, the humanoid from Agility Robotics, carries totes. Painting robots work on assembly lines. UBTECH's Walker S2 can lift and carry loads on production lines and autonomously swap its own battery. Unitree Robotics' humanoids can walk and crouch to pick up and move objects, but they are still used mainly for research and demonstrations. Useful as these robots are, they remain far from being general-purpose home assistants.
There is wide disagreement among people working in robotics about how quickly this gap will close. In March 2025 Nvidia CEO Jensen Huang told reporters, “This is not a five-years-away problem, this is a few-years-away problem.” In September 2025 roboticist Rodney Brooks wrote, “The first profitable deployment of humanoid robots, even those with minimal dexterity, is more than a decade away.” He also warned of the danger such robots pose because of their poor coordination and their risk of falling. “My advice to people is to never get within 10 feet of a full-size walking robot,” Brooks wrote.
What keeps Main Street from looking like a sci-fi set right now is that most humanoids are still in the kindergartens we have built for them, learning from human teleoperators or inside simulators. What we don't know is how long their education will take. When humanoid robots do become commonplace, they will be more capable than today's systems but much less flashy than the clips that go viral on TikTok: machines doing the work they were trained for, day after day, without drama.






