NVIDIA Research is advancing methods that combine robotic simulation, optimization and AI to enable more generalizable and adaptable robot behavior.
At this year’s Robotics: Science and Systems (RSS) conference, taking place June 21-25 in Los Angeles, members of the global robotics community are convening to explore breakthroughs in autonomy, perception and physical intelligence.
NVIDIA researchers are presenting cutting-edge work spanning simulation-to-real transfer, agile humanoid robot control, GPU-accelerated planning and foundation models for open-world reasoning.
“The RSS conference stands as a pillar for both foundational research and real-world innovation in robotics,” said Fabio Ramos, principal research scientist at NVIDIA. “This year, NVIDIA’s research — from enabling humanoid robots to learn manipulation skills and agile, full-body motions through real-world data, to advancing reasoning and perception — has brought the robotics community closer to achieving real-time, adaptable and intelligent autonomy in complex environments.”
Below are selected NVIDIA Research papers showcasing advancements in robot learning and control at RSS:
- ASAP: This approach, developed by Carnegie Mellon University and NVIDIA, enables humanoid robots to perform agile, full-body motions in the real world by learning corrections to simulated physics from real-world data.
- Sim-and-Real Co-Training: A simple co-training approach taps into simulation and real-world data to boost manipulation performance without domain randomization or fine-tuning.
- Differentiable GPU-Parallelized Task and Motion Planning: Researchers from the Massachusetts Institute of Technology and NVIDIA developed this method to bring real-time, high-dimensional robot planning closer to reality with a fully differentiable task- and motion-planning pipeline optimized for parallel execution on GPUs.
- Open-World Task and Motion Planning via Vision-Language Model Inferred Constraints: This algorithm infers task constraints from natural language and scene context, enabling robots to plan in complex, unstructured environments.
- Fail2Progress: An approach that turns robotic failures into useful learning signals via probabilistic refinement using Stein variational inference, a computational statistics method for approximating complex probability distributions.
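The Stein variational inference mentioned in the Fail2Progress entry updates a set of particles so they collectively approximate a target distribution, balancing a gradient term that pulls particles toward high-probability regions against a kernel term that pushes them apart. As a rough illustration of the idea (not the paper's actual implementation), a minimal one-dimensional Stein variational gradient descent step with an RBF kernel might look like this:

```python
import numpy as np

def svgd_step(x, grad_logp, bandwidth=1.0, step=0.1):
    """One Stein variational gradient descent update for 1-D particles.

    x: (n,) array of particle positions
    grad_logp: function returning d/dx log p(x) for the target density
    """
    n = x.shape[0]
    diffs = x[:, None] - x[None, :]                # pairwise differences x_i - x_j
    k = np.exp(-diffs**2 / (2 * bandwidth**2))     # RBF kernel values k(x_i, x_j)
    grad_k = -diffs / bandwidth**2 * k             # d/dx_i k(x_i, x_j)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * grad_logp(x_j) + d/dx_j k(x_j, x_i) ]
    phi = (k.T @ grad_logp(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# Target: standard normal, so grad log p(x) = -x
rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, scale=0.5, size=50)  # initialize far from the target
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
print(particles.mean())  # particles drift toward the target mean of 0
```

The attractive term concentrates particles where the target density is high, while the kernel-gradient term acts as a repulsive force that keeps them spread out, which is what lets the particle set represent a full distribution rather than collapsing to a single mode.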
Robotics workshops at RSS featuring NVIDIA speakers include:
- Robot Planning in the Era of Foundation Models: Investigates how foundation models can be integrated with symbolic and neuro-symbolic planning to advance robot task and motion planning.
- Generative Modeling Meets HRI: Explores how generative models can enhance human-robot interaction through better intent prediction, communication and co-adaptation.
- Fast Motion Planning and Control in the Era of Parallelism: Discusses how modern parallel computing architectures are transforming real-time motion planning and control.
- Unifying Visual SLAM: From Fragmented Datasets to Scalable, Real-World Solutions: Seeks to advance simultaneous localization and mapping capabilities to process large-scale data and build more scalable, real-world solutions for robotics and computer vision.
Explore the latest work from NVIDIA Research and check out the Robotics Research and Development Digest (R²D²), which gives developers deeper insight into the latest physical AI and robotics breakthroughs.
Robot in featured image courtesy of Unitree.