Robots Don’t Get Smart in Labs

Analytics India Magazine (Sanjana Gupta)

Robotics companies often talk about intelligence as something built in controlled environments, trained in simulation, and perfected before deployment. That logic borrows heavily from the rise of digital AI, where models improved behind screens before reaching users. But in the physical world, intelligence does not mature that way, and several founders now argue it never will.

Robots get smarter by working, said Josh Gruenstein, chief executive of Tutor Intelligence, in a conversation with AIM on the sidelines of AWS re:Invent 2025, held during the first week of December in Las Vegas. He added that learning happens on factory floors and warehouse aisles, where machines repeat tasks, fail, recover, and try again. For Physical AI, deployment is not the final step. It is the training loop.

“I’ve been fascinated by robots for as long as I can remember,” Gruenstein said, adding that he has been building them for fun since he was a kid. “I’ve had robot birthday parties every year since I was eight years old.”

That childhood curiosity hardened into formal training. As a student at MIT, Gruenstein worked on embodied artificial intelligence for robots, researching how machines perceive and act in the physical world. But the lab environment left him dissatisfied. “It’s really convenient to be able to train a robot AI policy in simulation,” he said. “But, that eventually needs to go to work in the world.”

The gap between academic success and real-world performance became difficult to ignore, and it pushed him away from lab-only learning. “The core question is, how do you bridge that gap?” he said. Tutor Intelligence emerged as his answer.

Gruenstein and his co-founder, Alon Kosowsky-Sachs, asked whether intelligence could be built outside controlled environments, at scale, and under real constraints. “Could we deploy robots into the world,” he said, “where those robots could gather experience, and that experience would directly enable deployment of more robots?” 

When Simulation Stops

Simulation remains central to robotics research, but Gruenstein sees limits in how far it can go. “The idea of having to cross a modality in order to solve a learning problem is very unique to us,” he said, referring to the jump from simulated environments to real factories. In theory, better simulators could narrow that gap. In practice, too many variables remain.

He described the problem as a mismatch between data distributions. “When you go out to inference in the real world, you have a guarantee that inference is in distribution,” he said, only if the training data closely matches reality. In simulation, that alignment breaks down. “You really can’t check everything off,” he added, pointing to gaps in photorealism, contact modelling, hardware behaviour, and task diversity.
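The in-distribution concern can be made concrete with a toy covariate-shift check: estimate how much of the deployment data falls outside the range of values the policy ever saw in training. The Gaussian feature values, the size of the shift, and the range-based test below are all invented for illustration and are far cruder than anything a real robotics stack would use.

```python
import random

random.seed(0)

# Feature values the policy saw in simulation vs. what it meets
# on a real factory floor (toy 1-D stand-ins for sensor readings).
sim = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # training domain
real = [random.gauss(4.0, 1.0) for _ in range(10_000)]  # deployment domain

# Crude out-of-distribution test: the support observed during training.
lo, hi = min(sim), max(sim)
ood = sum(not (lo <= x <= hi) for x in real) / len(real)

print(f"{ood:.0%} of real-world inputs fall outside the sim training range")
```

Even with identical task logic, a modest shift in the input distribution leaves a large fraction of deployment inputs in territory the policy never trained on, which is the gap Gruenstein describes.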

Rather than a perfect simulation, Tutor chose a different path. 

“Our model is [that] robots go out into the world, and they collect data exactly as they’re doing inference,” Gruenstein said. “Literally, they reach the data from one minute before.” By learning inside the same environment they operate in, robots avoid many of the distribution gaps that stall lab-trained systems.

This approach reframes factories and warehouses. They are no longer just deployment sites; they become continuous data sources. Every pallet stacked or box moved feeds back into the system.

Over time, that data improves not just one robot, but the entire fleet.
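Gruenstein's deployment-as-training loop can be sketched as a toy fleet learner: every robot's task attempts land in a shared experience pool, and a fleet-wide policy parameter is nudged using the most recent outcomes. The class name, the single grip-force parameter, the target value, and the learning rate are all illustrative assumptions, not details of Tutor's actual system.

```python
class FleetLearner:
    """Toy sketch of deployment-as-training: robots act with the current
    fleet policy, their outcomes are pooled, and the shared policy is
    updated from recent experience. Purely illustrative."""

    def __init__(self):
        self.experience = []   # pooled (robot_id, action, success, error) tuples
        self.grip_force = 0.5  # single shared policy parameter

    def run_task(self, robot_id, true_best=0.8):
        # A robot acts with the current fleet policy and logs the outcome.
        error = self.grip_force - true_best
        success = abs(error) < 0.1
        self.experience.append((robot_id, self.grip_force, success, error))
        return success

    def update_policy(self, batch=3, lr=0.2):
        # Fleet-level update: nudge the shared parameter using the
        # average error from the most recent batch of episodes.
        recent = self.experience[-batch:]
        if not recent:
            return
        mean_error = sum(e for *_, e in recent) / len(recent)
        self.grip_force -= lr * mean_error


fleet = FleetLearner()
for step in range(50):
    for robot in range(3):   # three robots at different sites
        fleet.run_task(robot)
    fleet.update_policy()

print(round(fleet.grip_force, 2))  # converges toward the target 0.8
```

The point of the sketch is the loop structure, not the arithmetic: no robot trains in isolation, and every deployed unit's work shrinks the error for the whole fleet.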

The need for this kind of data was echoed at a panel at the conference. NVIDIA’s head of robotics and edge computing ecosystem, Amit Goel, noted that physical systems generate far more information than text or images alone. The data generated from the physical world is orders of magnitude greater than existing text data, he said. That data has to be collected somewhere, and simulation alone cannot supply it at scale.

Same Robots, Customised Solutions

Tutor’s strategy ties learning directly to economics. Gruenstein described factories as “a liquid labour market.” They already spend fixed budgets on repetitive work. Robots that match those costs can slot into existing operations. “You can provide a solution that can sort of deliver productivity in that facility at those same parameters,” he said.

That makes intelligence a by-product of work. Gruenstein said, “Customers will pay us to generate more economic value on more robots, to collect more data, and so on.” Instead of funding long training cycles, customers effectively subsidise learning through daily operations.

This model also reshapes how generalisation works in robotics. Gruenstein rejected the idea that one robot must learn everything on its own. Alongside that, there is shared experience across sites. “What is the general experience that comes across all of my factories?” he said, indicating that it improves the fleet as a whole.

The result is distributed intelligence. 

Individual robots adapt to local conditions. Fleet-level systems capture patterns that repeat across customers. Over time, both layers improve. This definition of generalisation looks less like a single brain and more like an organisation learning from experience.

Gruenstein sees this stitching together of data sources as the next challenge. “We have simulation data. We have real-world robot data,” he said. “What is the recipe by which all these things come together?” Unlike language models, robotics does not yet have settled answers.

He points out that robotics lacks the data volumes that powered recent AI leaps. “Nobody has collected the volumes of physical data that are commensurate anywhere near the volumes of data that have been used to train frontier models,” he said.

Closing that gap requires time and scale, not a single launch.

That reality also shapes where robots appear first. Gruenstein noted that 92% of US manufacturers still operate without robots. Many change products daily, which breaks traditional automation. “None of that automation works,” he said, describing factories that rely on human flexibility instead of fixed systems. Deployment-first robots fit those environments because they learn as conditions change.
