3D deep learning holds the potential to accelerate progress in everything from robotics to medical imaging. But until now, researchers haven’t had the right tools to easily manage and visualize different types of 3D data.
NVIDIA Kaolin is a collection of tools within the NVIDIA Omniverse simulation and collaboration platform that lets researchers visualize and generate datasets, move data between 3D tools, and preserve core functionality for other users.
NVIDIA AI Podcast host Noah Kravitz spoke with four NVIDIANs about their work on the platform, including Richard Kerris, industry general manager for Omniverse; Jean-Francois Lafleche, a deep learning engineer; Senior Research Scientist Masha Shugrina; and Research Scientist Clement Fuji Tsang.
Kaolin includes both a library, which contains a growing number of GPU-optimized operations, and an app within NVIDIA Omniverse for interactive 3D data visualization. The long-term goal is to make both facets so robust that users could import a photo and generate a highly detailed 3D model from it, without spending time recreating the scene within a 3D platform.
Key Points From This Episode:
- 3D data doesn’t just take the form of meshes — it can also manifest as point clouds, implicit functions and voxels. Each type of data requires different tools, and researchers need exporters and renderers to move between them. NVIDIA Kaolin unites these tools for more efficient conversion across data types.
- NVIDIA Kaolin is an open-source project, available on GitHub. While NVIDIA experts will continue to expand and improve the library, contributions from the community will ensure that it’s a beneficial tool for everyone.
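To make the conversion idea above concrete, here is a minimal sketch of one such transformation — turning a triangle mesh into a point cloud by sampling its surface. This is a toy, pure-Python illustration of the concept, not Kaolin's actual API; the function name and signature are assumptions for the example, and a GPU-optimized library routine would do this in batched tensor operations instead.

```python
import random

def sample_points_from_mesh(vertices, faces, num_samples, seed=0):
    """Sample points uniformly from the surface of a triangle mesh.

    vertices: list of (x, y, z) tuples; faces: list of (i, j, k) index triples.
    A toy stand-in for a library conversion op (mesh -> point cloud),
    not NVIDIA Kaolin's real API.
    """
    rng = random.Random(seed)

    def tri_area(a, b, c):
        # Half the magnitude of the edge cross product is the triangle's area.
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cx = u[1] * v[2] - u[2] * v[1]
        cy = u[2] * v[0] - u[0] * v[2]
        cz = u[0] * v[1] - u[1] * v[0]
        return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

    areas = [tri_area(*(vertices[i] for i in f)) for f in faces]
    points = []
    for _ in range(num_samples):
        # Pick a face with probability proportional to its area, then pick
        # a uniform point inside it via barycentric coordinates.
        f = rng.choices(faces, weights=areas, k=1)[0]
        a, b, c = (vertices[i] for i in f)
        r1, r2 = rng.random(), rng.random()
        s1 = r1 ** 0.5
        w0, w1, w2 = 1 - s1, s1 * (1 - r2), s1 * r2
        points.append(tuple(w0 * a[i] + w1 * b[i] + w2 * c[i]
                            for i in range(3)))
    return points
```

For example, sampling a unit square built from two triangles yields points scattered uniformly across that square — the same mesh, re-expressed in point-cloud form.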
“We’re taking all these tools and tying them together with the Kaolin library so that … it becomes really easy to do something like visualizing your data.” — Jean-Francois Lafleche [6:49]
“[Kaolin] really allows you to debug and understand your models better and more quickly.” — Masha Shugrina [13:19]
You Might Also Like:
Real-time graphics technology, namely GPUs, sparked the modern AI boom. Now modern AI, driven by GPUs, is remaking graphics. Aaron Lefohn, senior director of real-time rendering research at NVIDIA, speaks on how he and his team are combining AI and ray tracing to rapidly improve real-time graphics.
Lynn Richards, president and CEO of the Congress for New Urbanism, and Charles Marohn, president and co-founder of Strong Towns, discuss how deep learning will create more livable cities.
Teaching a vehicle to see what’s on the road in front of it requires a massive amount of data and computing power. Ford’s Nikita Jaipuria and Rohan Bhasin discuss how they’re using GANs to help autonomous vehicle systems see as well in snowy conditions as they do in sunny ones.
Tune in to the AI Podcast
Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.
Make the AI Podcast Better
Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.