For too long, AI has been trapped in Flatland, the two-dimensional world imagined by English schoolmaster Edwin Abbott Abbott. While chatbots, image generators, and AI-driven video tools have dazzled us, they remain confined to the flat surfaces of our screens.
Now, NVIDIA is tearing down the walls of Flatland, ushering in the era of “physical AI”: a world where artificial intelligence can perceive, understand, and interact with the three-dimensional world around us.
“The next frontier of AI is physical AI. Imagine a large language model, but instead of processing text, it processes its surroundings,” said Jensen Huang, the CEO of NVIDIA. “Instead of taking a question as a prompt, it takes a request. Instead of producing text, it produces action tokens.”
How is this different from traditional robotics? Traditional robots are typically pre-programmed to perform specific, repetitive tasks in controlled environments. They excel at automation but lack the adaptability and understanding to handle unexpected situations or navigate complex, dynamic environments.
Kimberly Powell, vice president of healthcare at NVIDIA, spoke to the potential in healthcare environments during her announcement at the JP Morgan Healthcare Conference:
“Every sensor, every patient room, every hospital, will integrate physical AI,” she said. “It’s a new concept, but the simple way to think about physical AI is that it understands the physical world.”
Understanding is the crux of the matter. While traditional AI and autonomous systems might operate in a physical space, they have historically lacked a holistic sense of the world beyond what they need to carry out rote tasks.
Advanced AI systems are steadily making gains as the performance of GPUs accelerates. In an episode of the “No Priors” podcast in November, Huang revealed that NVIDIA had improved its Hopper architecture’s performance by a factor of five over 12 months while maintaining application programming interface (API) compatibility across higher software layers. Its latest architecture is Blackwell.
“A factor of five improvement in a single year is not possible using traditional computing approaches,” Huang noted. He explained that accelerated computing combined with hardware-software co-design methodologies enabled NVIDIA to “invent all kinds of new things.”
Toward ‘artificial robotics intelligence’
Huang also discussed his perspective on artificial general intelligence (AGI), suggesting that not only is AGI within reach, but artificial general robotics is approaching technological feasibility as well.
Powell echoed a similar sentiment in her talk at JP Morgan. “The AI revolution is not only here, it is massively accelerating,” she said.
Powell noted that NVIDIA’s efforts now encompass everything from advanced robotics in manufacturing and healthcare to simulation tools like Omniverse that generate photorealistic environments for training and testing.
In a parallel development, NVIDIA has introduced new computational frameworks for autonomous systems development. The Cosmos World Foundation Models (WFM) platform supports processing visual and physical data at scale, with frameworks designed for autonomous vehicle and robotics applications.

NVIDIA Cosmos has four key architectural components: an autoregressive model for sequential frame prediction, a diffusion model for iterative video generation, a video tokenizer for efficient compression, and a video processing pipeline for data curation. These components form an integrated platform for physics-aware world modeling and video generation. | Source: NVIDIA
Tokenizing reality
At CES 2025 last week, Huang underscored just how different “physical AI” will be compared to text-centric large language models (LLMs): “What if, instead of the prompt being a question, it’s a request: go over there and pick up that box and bring it back? And instead of producing text, it produces action tokens? That is a very sensible thing for the future of robotics, and the technology is right around the corner.”
In the same No Priors podcast, Huang noted that the strong demand for multimodal LLMs could drive advances in robotics. “If you can generate a video of me picking up a coffee cup, why can’t you prompt a robot to do the same?” he asked.
Huang also highlighted “brownfield” opportunities in robotics, where no new infrastructure is required, citing self-driving cars and human-shaped robots as prime examples. “We built our world for cars and for humans. These are the most natural forms of physical AI.”
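To make the contrast with text-only LLMs concrete, here is a minimal, purely illustrative sketch in Python of the prompt-to-action-token loop Huang describes. The names used (WorldFoundationModel, Observation, the action vocabulary) are hypothetical and do not correspond to any NVIDIA API; a real physical-AI model would replace the canned plan with inference over tokenized video, text, and sensor streams.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical discrete action vocabulary; each token names a low-level motion primitive.
ACTION_VOCAB = ["MOVE_TO_BOX", "LOWER_ARM", "CLOSE_GRIPPER", "RAISE_ARM", "RETURN", "STOP"]

@dataclass
class Observation:
    """One multimodal input step: camera pixels plus proprioceptive joint state."""
    rgb_frame: bytes
    joint_angles: List[float]

class WorldFoundationModel:
    """Stand-in for a physical-AI model: a request plus observations in, action tokens out."""

    def next_action_token(self, request: str, history: List[Observation]) -> str:
        # A real model would run inference over tokenized video, text, and sensor data here.
        # This placeholder walks through a canned plan so the loop below terminates.
        return ACTION_VOCAB[min(len(history) - 1, len(ACTION_VOCAB) - 1)]

def run_request(model: WorldFoundationModel, request: str,
                read_sensors: Callable[[], Observation]) -> List[str]:
    """Autoregressively sample action tokens until the model emits STOP."""
    history: List[Observation] = []
    actions: List[str] = []
    while not actions or actions[-1] != "STOP":
        history.append(read_sensors())  # perceive the surroundings before every step
        actions.append(model.next_action_token(request, history))
    return actions

if __name__ == "__main__":
    fake_sensors = lambda: Observation(rgb_frame=b"", joint_angles=[0.0] * 6)
    print(run_request(WorldFoundationModel(),
                      "Go over there, pick up that box, and bring it back", fake_sensors))
```

The structural difference from a chatbot shows up in the loop: each action token is conditioned on fresh observations of the physical surroundings, not only on the text that came before.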
The structural underpinnings of Cosmos
NVIDIA’s Cosmos platform emphasizes physics-aware video modeling and sensor data processing. It also introduces a framework for training and deploying WFMs, with parameter sizes ranging from 4 billion to 14 billion, designed to process multimodal inputs including video, text, and sensor data.
The system architecture incorporates physics-aware video models trained on roughly 9,000 trillion tokens, drawn from 20 million hours of robotics and driving data. The platform’s data processing infrastructure leverages the NeMo Curator pipeline, which enables high-throughput video processing across distributed computing clusters.
This architecture supports both autoregressive and diffusion models for generating physics-aware simulations, with benchmarks showing up to a 14x improvement in pose estimation accuracy compared to baseline video synthesis models. The system’s tokenizer implements an 8x compression ratio for visual data while maintaining temporal consistency, which is essential for real-time robotics applications.
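To put the stated 8x compression ratio in perspective, here is a short back-of-the-envelope calculation in Python. It assumes, purely for illustration, an 8x reduction along each of the time, height, and width axes; that factorization is an assumption rather than a confirmed Cosmos specification, but it shows why tokenization matters for keeping long video contexts tractable.

```python
import math

def token_grid(frames: int, height: int, width: int, reduction: int = 8) -> tuple:
    """Size of the discrete token grid if a video tokenizer reduced each axis
    (time, height, width) by `reduction`. Illustrative only; the real Cosmos
    tokenizer's exact factorization may differ."""
    return (math.ceil(frames / reduction),
            math.ceil(height / reduction),
            math.ceil(width / reduction))

# A roughly 4-second clip at 30 fps and 720p resolution:
t, h, w = token_grid(frames=121, height=720, width=1280)
print(t, h, w)        # 16 x 90 x 160 token positions
print(t * h * w)      # 230,400 tokens, versus 111,513,600 raw pixel values per channel
```

Whatever the exact compression scheme, the effect the article describes is the same: the world model attends over a far smaller token grid, while the tokenizer preserves the temporal consistency that robotics applications need.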
The vision for physical AI
The development of world foundation models (WFMs) represents a shift in how AI systems interact with the physical world. The complexity of physical modeling presents unique challenges that distinguish WFMs from conventional language models.
“[The world model] has to understand physical dynamics, things like gravity and friction and inertia. It has to understand geometric and spatial relationships,” explained Huang. This comprehensive understanding of physics principles drives the architecture of systems like Cosmos, which implements specialized neural networks for modeling physical interactions.
The development methodology for physical AI systems parallels that of LLMs, but with distinct operational requirements. Huang drew this connection explicitly: “Imagine, while your large language model, you give it your context, your prompt on the left, and it generates tokens.”
The platform’s extensive training requirements align with Huang’s observation that “the scaling law says that the more data you have, the training data that you have, the larger model that you have, and the more compute that you apply to it, therefore the more effective, or the more capable your model will become.”
This principle is exemplified in Cosmos’s training dataset of 9,000 trillion tokens, demonstrating the computational scale required for effective physical AI systems.
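For readers who want Huang’s description of the scaling law in symbols, one widely cited formulation for language models (the Chinchilla analysis of Hoffmann et al., 2022, offered here only to illustrate the principle, not as a Cosmos-specific result) writes expected loss L in terms of parameter count N and training-token count D:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022), illustrative only:
% loss falls as a power law in both model size N and training tokens D,
% given enough compute to train the larger model on the larger dataset.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Increasing N, D, and the compute spent on them drives the last two terms down, which is the “more data, larger model, more compute” relationship Huang summarizes.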

The image illustrates NVIDIA’s Isaac GR00T technology, showing a human operator using a VR headset to demonstrate actions that are mirrored by a humanoid robot in a simulated environment. The demonstration highlights teleoperator-based synthetic motion generation for training next-generation robotic systems. | Source: NVIDIA
Future implications
Physical AI has the potential to transform more than traditional users of robotics. In parallel with advances in physical AI, AI agents are also quickly expanding their skill sets. Huang described such agents as “the new digital workforce working for and with us.”
Whether it’s in manufacturing, healthcare, logistics, or everyday consumer technology, these intelligent agents can relieve people of repetitive tasks, operate continuously, and adapt to rapidly changing conditions. In his words, “It is very, very clear AI agents will be the next robotics industry, and likely to be a multi-trillion dollar opportunity.”
As Huang put it, we’re approaching a time when AI will “be with you” constantly, seamlessly integrated into our lives. He pointed to Meta’s smart glasses as an early example, envisioning a future where we can simply gesture or use our voice to interact with our AI companions and access information about the world around us.
This shift toward intuitive, always-on AI assistants has profound implications for how we learn, work, and navigate our surroundings, according to Huang.
“Intelligence, of course, is the most valuable asset that we have, and it can be applied to solve a lot of very challenging problems,” he said.
As we look to a future filled with always-on AI agents, immersive augmented reality, and trillion-dollar opportunities in robotics, the age of “Flatland AI” is poised to draw to a close, and the real world is set to become AI’s greatest canvas.
Editor’s note: This article was syndicated from The Robot Report sibling site R&D World.