They say there’s no such thing as a dumb question. As someone who asks dumb questions for a living, I can tell you that’s a really stupid thing to say.
But the best questions are often the ones where someone smart explains something from the ground up to a total novice (read: me). The beauty of NVIDIA: there are a lot of smart people upon whom I can inflict my very dumbest questions.
Turns out I’m not alone. Month in and month out, tens of thousands of readers ask search engines these very questions. And they get connected to the answers through our blog.
What are they? Smart question. Here are five of our most popular in 2019.
This post is over a decade old, but the answer — thanks to the emergence of deep-learning driven AI, supercomputing, and self-driving cars — is more relevant than ever. That’s why we updated our original post earlier this year, and why more readers are seeing this post than ever.
Visualize the fields of AI, machine learning and deep learning as concentric circles. AI — the idea that came first — is the largest circle. Then comes machine learning, which blossomed later. And finally deep learning — which is driving today’s AI explosion — fits inside both. Click on the link, above, for more.
This is one of the key questions in AI right now, which is why this post has become one of our most popular. Click on the link for a plain English answer to these questions, and a walk through the kinds of datasets and problems that lend themselves to each kind of learning.
This is another question that’s drawn more readers over time. Training, in short, is the process of running data through a neural network to teach it a task. That’s taught computers to do things that, just a decade ago, most believed could only be done by humans. Inference, by contrast, is the process of putting that trained network to work, in everything from hyperscale data centers to autonomous machines.
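The distinction is easy to see in a toy sketch. The example below trains a single artificial neuron to compute the logical AND function, then runs inference with the frozen weights; it is a deliberately tiny illustration of the two phases, not any actual NVIDIA pipeline.

```python
import numpy as np

# Toy illustration of training vs. inference: a single neuron learns
# the logical AND function. Not representative of any real pipeline.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = rng.normal(size=2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: repeatedly run data through the network and nudge the weights
# in the direction that reduces the error.
for _ in range(5000):
    pred = sigmoid(X @ w + b)
    grad = pred - y                     # gradient of the cross-entropy loss
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

# Inference: the trained weights are frozen and simply applied to new input.
def infer(a, b_in):
    return bool(sigmoid(np.array([a, b_in]) @ w + b) > 0.5)

print(infer(1, 1))  # True
print(infer(0, 1))  # False
```

Training is the expensive loop; inference is the cheap forward pass — which is why the two phases are often run on very different hardware.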
Used to be if you wanted to see ray tracing, you went to the movies. If you wanted to see rasterization, you fired up a video game. Ray tracing models the way light moves around the real world beautifully, but it’s computationally intensive. Rasterization, by contrast, can be done in a hurry. NVIDIA’s latest Turing architecture GPUs blur these lines, with hardware acceleration for real-time ray tracing, making truly cinematic games possible.
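The cost difference comes from the core primitive: a ray tracer must solve an intersection test for every ray it fires. A minimal sketch of that test — does a ray hit a sphere? — looks like this (an illustrative toy, not production renderer code):

```python
import math

# Core ray-tracing primitive: does a ray hit a sphere? Real renderers
# fire millions of such rays per frame, which is why ray tracing costs
# so much more than rasterization.

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic).
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return False          # the ray misses the sphere entirely
    t = (-b - math.sqrt(discriminant)) / (2 * a)
    return t > 0              # a hit only counts if it's in front of the camera

# A ray fired down the -z axis toward a sphere centered at (0, 0, -5):
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # True
print(ray_hits_sphere((0, 0, 0), (0, 1, 0), (0, 0, -5), 1.0))   # False
```

Rasterization skips this per-ray work entirely by projecting triangles onto the screen, which is why it has dominated real-time graphics for decades.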
Have a question you want answered? Send us your idea.
ABB Robotics Taps NVIDIA Omniverse to Deliver Industrial‑Grade Physical AI at Scale
ABB Robotics RobotStudio, enabled by NVIDIA Omniverse libraries, closes the sim‑to‑real gap with 99% accuracy, as Foxconn and global manufacturers begin pilots ahead of its 2026 release.
By integrating NVIDIA Omniverse libraries directly into its RobotStudio programming and simulation suite, ABB Robotics will now deliver physically accurate simulation capabilities in its platform, dramatically cutting engineering time, reducing deployment costs by up to 40% and accelerating time to market by as much as 50%.
The new product — called RobotStudio HyperReality — will be available in the second half of 2026 and is already drawing strong interest from ABB Robotics’ global customer base. Early pilots include Foxconn, the world’s largest electronics manufacturer, and Workr, a U.S.‑based robotic workforce company bringing advanced automation to small and medium-size manufacturers.
The partnership marks a major milestone for the industrial sector, which has long sought a reliable way to bring AI-powered intelligence to robots, bridging the sim‑to‑real gap that separates virtual robot training from real‑world performance.
“Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology’s long-standing ‘sim-to-real’ gap – a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications,” said Marc Segura, president of ABB Robotics.
A Breakthrough in Physical AI for Industry
ABB’s integration of NVIDIA Omniverse libraries into RobotStudio brings physically accurate, photorealistic simulation directly into the tool used by more than 60,000 robotics engineers worldwide. The result is a unified workflow where manufacturers can design, program, test and validate entire automation cells before deploying a single robot.
RobotStudio HyperReality exports a fully parameterized robot station — robots, sensors, lighting, kinematics and parts — as a USD file into NVIDIA Omniverse. There, ABB Robotics’ virtual controller runs the same firmware as the physical robot, enabling 99% correlation between simulation and real‑world behavior. Synthetic images generated in Omniverse feed directly into AI training pipelines, allowing vision models to be trained entirely in simulation.
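USD scene description has a human-readable text form. A minimal stage sketching a robot cell might look like the following — the prim names and layout here are purely illustrative, not ABB’s actual export schema:

```usda
#usda 1.0
(
    defaultPrim = "RobotCell"
    metersPerUnit = 1.0
    upAxis = "Z"
)

def Xform "RobotCell"
{
    def Xform "Robot"
    {
        double3 xformOp:translate = (0.0, 0.0, 0.0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }

    def Sphere "Part"
    {
        double radius = 0.05
        double3 xformOp:translate = (0.5, 0.0, 0.1)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because every tool in the pipeline reads the same USD description, the station built in RobotStudio and the one simulated in Omniverse are literally the same data.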
This combination of physics‑rich simulation, synthetic data generation and ABB’s Absolute Accuracy technology — which reduces positioning errors from 8-15 mm to around 0.5 mm — delivers unmatched precision for industrial‑grade applications.
Closing the Sim‑to‑Real Gap
For decades, manufacturers have struggled with the limitations of simulation: lighting that doesn’t match reality, materials that behave differently on the factory floor and models that fail when exposed to real‑world variation. ABB Robotics’ integration of NVIDIA Omniverse directly addresses these challenges.
“The industrial sector needs high‑fidelity simulation to bridge the gap between virtual training and real‑world deployment of AI‑driven robotics at scale,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.”
With RobotStudio HyperReality, manufacturers can design and validate production lines virtually, cutting setup and commissioning times by up to 80% and eliminating the need for physical prototypes. The result is faster product ramps, lower cost and greater reliability — especially for industries like consumer electronics where precision is paramount.
ABB Robotics is also exploring the integration of the NVIDIA Jetson edge AI platform into its Omnicore controller to enable real‑time inference across its robot portfolio.
Real‑World Pilots: Foxconn and Workr
Several customers are already testing RobotStudio HyperReality ahead of its full release.
Foxconn is piloting the technology in consumer electronics assembly, where delicate metal components and frequent product variations make automation challenging. Using HyperReality, Foxconn trains robots virtually with synthetic data, achieving unparalleled accuracy when deployed on the production line. The company expects to reduce setup time and eliminate costly physical testing.
Workr, a California‑based robotic workforce company, is integrating its own physical AI platform, WorkrCore, with ABB industrial robots trained with synthetic data generated using NVIDIA Omniverse libraries to deploy advanced automation to small and medium-size manufacturers. At NVIDIA GTC 2026 in San Jose, Workr plans to demonstrate AI‑powered robotic systems that can onboard new parts in minutes and deploy without programming expertise.
Don’t miss NVIDIA founder and CEO Jensen Huang’s GTC keynote at the SAP Center on March 16 at 11:00 a.m. PT, where he’ll share the latest breakthroughs in AI and accelerated computing.
NVIDIA and Global Industrial Software Leaders Partner With India’s Largest Manufacturers to Drive AI Boom
India’s largest manufacturers are teaming with global industrial software leaders Cadence, Siemens and Synopsys to build AI factories for design and manufacturing accelerated by NVIDIA AI infrastructure, CUDA-X and Omniverse libraries.
India is entering a new age of industrialization, as AI transforms how the world designs, builds and runs physical products and systems. The country is investing $134 billion in new manufacturing capacity across construction, automotive, renewable energy and robotics, creating both a massive challenge and opportunity to build software-defined factories from day one.
At the center of this transformation are applications accelerated by NVIDIA CUDA-X and NVIDIA Omniverse libraries, which connect data from design to operations and bring physical AI into factories, warehouses and infrastructure.
India’s largest manufacturers are teaming with global industrial software leaders Cadence, Siemens and Synopsys to advance the nation’s AI boom using applications accelerated by CUDA-X and Omniverse libraries.
India’s Manufacturing Leaders Modernize Factories With Siemens and NVIDIA
To scale India’s growth, manufacturers are using Siemens industrial software integrated with NVIDIA CUDA-X and Omniverse libraries to design, build and operate next-generation, software-defined factories.
Reliance New Energy, the clean energy arm of Reliance Industries, is expanding its collaboration with NVIDIA and Siemens by combining Siemens’ digital twin technology with NVIDIA Omniverse libraries for faster, more precise simulation and plant design for its next-generation gigafactories.
Addverb Technologies, a leading Indian company providing robots and innovative warehouse automation solutions, is using Siemens’ Tecnomatix portfolio, NVIDIA Omniverse libraries and NVIDIA Cosmos world foundation models to create digital twins of its factories and train its quadruped and wheeled humanoid robots in simulation.
Hero MotoCorp is utilizing Siemens Xcelerator and NVIDIA infrastructure to accelerate the product development lifecycle by enhancing its capabilities in computer-aided engineering, numerical virtual verification and validation.
Partners Advance Design and Engineering With NVIDIA-Accelerated Software From Synopsys and Cadence
Leading enterprises are integrating Synopsys and Cadence’s electronic design automation tools, powered by NVIDIA AI infrastructure and libraries, to enable rapid design iteration and operational intelligence across the energy, automotive and electronics sectors.
Electrical equipment and home appliances leader Havells India Limited is using Synopsys’ Ansys Fluent to accelerate simulation powered by NVIDIA CUDA-X. Havells has obtained 6x faster fluid dynamic simulations, enabling exploration of more design options to optimize airflow and energy efficiencies, and achieve faster time to market.
Larsen & Toubro Semiconductor’s application of Cadence Spectre X, accelerated by CUDA-X libraries, on NVIDIA GPUs shortens design iterations of next-generation AI chips.
India’s Technology Leaders Advance Industrial Automation With Physical AI
India’s IT and business consulting sector has grown into a global powerhouse, projected to reach over $350 billion this year, serving as a primary engine for transforming the world’s largest industries.
Tata Consultancy Services (TCS), a global leader in IT services, is investing in large-scale AI infrastructure to deliver enterprise solutions at scale. By harnessing the NVIDIA Metropolis platform, the NVIDIA Blueprint for video search and summarization and digital twins built on Omniverse libraries, TCS is setting safety and precision benchmarks at Tata Motors, converting standard camera feeds into intelligent sensors for automated quality checks and real-time safety compliance.
TCS is also deploying physical AI applications, including autonomous safety and quality inspections via quadruped robots, to minimize risk across complex manufacturing environments.
Wipro PARI, a leader in industrial automation, is integrating NVIDIA AI infrastructure, Omniverse libraries and the NVIDIA Isaac robotics development platform to deliver solutions for its consumer and automotive customers. This includes real-time simulation and validation of robotic workflows, as well as virtual stress-testing of operations before physical deployment.
Tata Consulting Engineers is launching its Cognitive Twin platform, built on NVIDIA Omniverse, to create real-time industrial simulations that link physical assets with digital intelligence across manufacturing, energy and infrastructure. The platform supports both capital project planning and operational optimization through early-stage simulation and AI-enabled decision-making. Pilot projects are underway with National High Speed Rail Corporation Limited, Torrent Power and Power Grid Corporation of India Limited.
Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.
Open source has become essential for driving innovation in robotics and autonomy. By providing access to critical infrastructure — from simulation frameworks to AI models — NVIDIA is enabling collaborative development that accelerates the path to safer, more capable autonomous systems.
At CES earlier this month, NVIDIA introduced a new suite of open physical AI models and frameworks to accelerate the development of humanoids, autonomous vehicles and other physical AI embodiments. These tools span the entire robotics development lifecycle — from high-fidelity world simulation and synthetic data generation to cloud-native orchestration and edge deployment — giving developers a modular toolkit to build autonomous systems that can reason, learn and act in the real world.
OpenUSD provides the common framework that standardizes how 3D data is shared across these physical AI tools, enabling developers to build accurate digital twins and reuse them seamlessly from simulation to deployment. NVIDIA Omniverse libraries, built on OpenUSD, serve as the source of ground‑truth simulation that feeds the entire stack.
From Labs to the Show Floor
At CES 2026, developers brought the NVIDIA physical AI stack out of the lab and onto the show floor, debuting machines ranging from heavy equipment and factory assistants to social and service robots.
The stack taps into NVIDIA Cosmos world models; NVIDIA Isaac technologies, including the new Isaac Lab-Arena open source framework for policy evaluation; the NVIDIA Alpamayo open portfolio of AI models, simulation frameworks and physical AI datasets for autonomous vehicles; and the NVIDIA OSMO framework to orchestrate training across compute environments.
Caterpillar’s Cat AI Assistant, powered by NVIDIA Nemotron open models for agentic AI and running on the NVIDIA Jetson Thor edge AI module, brings natural language interaction directly into the cab of heavy vehicles. Operators can ask “Hey Cat”-style questions and get step‑by‑step guidance, as well as adjust safety parameters by voice.
Behind the scenes, Caterpillar uses Omniverse libraries to build factory and job‑site digital twins that can help simulate layouts, traffic patterns and multi‑machine workflows. These insights are fed back into equipment and fleets before changes are deployed to job sites, making AI‑assisted operations safer and more efficient.
LEM Surgical showcased its Dynamis Robotic Surgical System, which is FDA-cleared and in routine clinical use for spinal procedures. The next-generation system uses NVIDIA Jetson AGX Thor for compute, NVIDIA Holoscan for real-time sensor processing and NVIDIA Isaac for Healthcare to train its autonomous arms.
LEM Surgical also uses NVIDIA Cosmos Transfer — an open, fully customizable world model that enables physically based synthetic data generation — to generate synthetic training data and the NVIDIA Isaac Sim framework for digital twin simulation. Designed as a dual-arm humanoid surgical robot for hard-tissue surgery, the Dynamis system mimics human surgeon dexterity and enables complex spinal procedures with enhanced precision, alleviating strenuous physical demands on surgeons and surgical assistants.
NEURA Robotics is building cognitive robots on a full NVIDIA stack, using Isaac Sim and Isaac Lab to train its 4NE1 humanoid and MiPA service robots in OpenUSD‑based digital twins before deployment in domestic settings and workplaces. The company used NVIDIA Isaac GR00T‑Mimic to post‑train the Isaac GR00T foundation model for its platforms.
In addition, NEURA Robotics is collaborating with SAP and NVIDIA to integrate SAP’s Joule agents with its robots, using the Mega NVIDIA Omniverse Blueprint to simulate and refine robot behavior in complex, realistic operational scenarios before those agents and behaviors are deployed into the company’s Neuraverse ecosystem, as well as in real‑world fleets.
AgiBot uses NVIDIA Cosmos Predict 2 as the world‑modeling backbone for its Genie Envisioner (GE-Sim) platform — allowing the platform to generate action‑conditioned videos grounded in strong visual and physical priors. Combining this data with Isaac Sim and Isaac Lab, as well as post‑training on AgiBot’s own data, lets policies developed in Genie Envisioner transfer more reliably to Genie2 humanoids and compact Jetson Thor-powered tabletop robots.
Intbot is using the NVIDIA Cosmos Reason 2 open model to give its social robots a “sixth sense” for the real world — using the model’s reasoning capabilities to identify social cues and safety context that go beyond simple scripted tasks. In its Cosmos Cookbook recipe, Intbot demonstrates how reasoning vision language models can aid robots in deciding when to speak and how to more naturally interact with humans.
How Robotics Developers Are Using New Toolkits and Frameworks
NVIDIA recently introduced Agile, an Isaac Lab-based engine for humanoid loco‑manipulation that packages a full, sim‑to‑real‑verified workflow for training robust reinforcement learning policies on platforms like the Unitree G1 and LimX Dynamics TRON.
Robotics developers can use Agile’s built‑in task configurations, Markov Decision Process mathematical models for decision-making, training utilities and deterministic evaluation tools to tune policies. Developers can then stress‑test these policies in Isaac Lab and transfer locomotion and whole‑body behaviors to real-world robots more reliably and efficiently.
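The Markov Decision Process framing underpinning such workflows can be sketched generically. The toy below runs tabular Q-learning on a five-cell corridor task — it illustrates the states, actions, rewards and policy-evaluation loop that reinforcement learning engines build on, but none of the names or numbers come from Agile’s actual API.

```python
import random

# Generic tabular Q-learning on a toy 5-cell corridor: the agent starts
# at cell 0 and earns a reward for reaching cell 4. Purely illustrative
# of the MDP framing; not Agile's API or configuration.

N_STATES, ACTIONS = 5, (-1, +1)   # actions: step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = random.Random(0)

for _ in range(500):                         # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update rule
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# Deterministic policy evaluation: the greedy action in every state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The trained policy steps right toward the goal in every state. Engines like Agile apply this same loop at vastly larger scale, with simulated physics standing in for the toy transition function.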
Hugging Face and NVIDIA are bringing together their robotics communities by integrating NVIDIA Isaac GR00T N models and simulation frameworks into the LeRobot ecosystem. Developers can now access Isaac GR00T N1.6 models and Isaac Lab‑Arena directly within LeRobot to streamline policy training and evaluation.
Plus, Hugging Face’s open‑source Reachy 2 humanoid is now fully interoperable with NVIDIA Jetson Thor, enabling the direct deployment of advanced vision language action (VLA) models for robust real‑world performance.
ROBOTIS, a leading developer of smart servos, industrial actuators, manipulators, open-source humanoid platforms and educational robotic kits, built an open source sim-to-real pipeline using NVIDIA Isaac technologies. The workflow starts with high‑fidelity data generation in Isaac Sim, scales up training sets using GR00T‑Mimic for augmentation and then fine‑tunes a VLA‑based Isaac GR00T N model that deploys directly to hardware — accelerating the transition from simulation to robust real‑world tasks.
Get Plugged In
Learn more about OpenUSD and robotics development by exploring these resources:
Read this technical blog to learn how to develop generalist humanoid capabilities with NVIDIA Isaac and GR00T N1.6.
Read this technical blog to learn how to evaluate generalist robot policies in simulation using NVIDIA Isaac Lab-Arena.
Participate in the Cosmos Cookoff, a hands-on physical AI challenge where developers use Cosmos Reason to power robotics, autonomous systems and vision AI workflows.