A Textured Approach: NVIDIA Research Shows How Gen AI Helps Create and Edit Photorealistic Materials

Researchers showcase AI techniques to rapidly create and edit materials, accelerating 3D workflows.
by Isha Salian

NVIDIA researchers won the Best in Show award at SIGGRAPH’s Real-Time Live event with a demo of generative AI models that can help artists rapidly iterate on 3D scenes.

The research demo showcases how artists can use text or image prompts to generate custom textured materials — such as fabric, wood and stone — faster and with finer creative control. These capabilities will be coming to NVIDIA Picasso, allowing enterprises, software creators and service providers to build custom generative AI models for materials using their own fully licensed data.

This set of AI models will facilitate iterative creation and editing of materials, enabling companies to offer new tools that’ll help artists rapidly refine a 3D object’s appearance until they achieve the desired result.

In the demo, NVIDIA researchers experiment with a living-room scene, as an AI-assisted interior designer might in any 3D rendering application. In this case, researchers use NVIDIA Omniverse USD Composer — a reference application for scene assembly and composition using Universal Scene Description, known as OpenUSD — to add a brick-textured wall, to create and modify fabric choices for the sofa and throw pillows, and to incorporate an abstract animal design in a specific area of the wall.
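
For readers new to OpenUSD, the sketch below shows roughly what assigning a material to scene geometry looks like with USD’s Python API. The stage, prim paths and brick material are illustrative stand-ins, not the demo’s actual assets:

```python
from pxr import Usd, UsdGeom, UsdShade, Sdf

# A stage is OpenUSD's container for a composed scene.
stage = Usd.Stage.CreateNew("living_room.usda")
wall = UsdGeom.Mesh.Define(stage, "/Room/Wall")  # stand-in for the demo's wall

# A Material prim groups the shading network; UsdPreviewSurface is
# USD's built-in physically based preview shader.
brick = UsdShade.Material.Define(stage, "/Room/Looks/BrickMat")
shader = UsdShade.Shader.Define(stage, "/Room/Looks/BrickMat/Preview")
shader.CreateIdAttr("UsdPreviewSurface")
shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(0.9)
brick.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")

# Binding the material is what changes the wall's appearance in the scene.
UsdShade.MaterialBindingAPI.Apply(wall.GetPrim()).Bind(brick)
stage.GetRootLayer().Save()
```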

Generative AI Enables Iterative Design 

The Real-Time Live demo combines several optimized AI models — a palette of tools that developers using Picasso will be able to customize and integrate into creative applications for artists.

Once integrated into creative applications, these features will allow artists to enter a brief text prompt to generate materials — such as a brick or a mosaic pattern — that are tileable, meaning they can be seamlessly replicated over a surface of any size. Or, they can import a reference image, such as a swatch of flannel fabric, and apply it to any object in the virtual scene.
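
“Tileable” here means the texture wraps onto itself with no visible seam, so a single tile can cover a surface of any size. A minimal NumPy sketch of that property, with a random array standing in for a generated material:

```python
import numpy as np

def seam_error(tex):
    # Heuristic: in a seamless texture, opposite edges continue into each
    # other when tiled, so the wrap-around difference should be small.
    top_bottom = np.abs(tex[0].astype(float) - tex[-1].astype(float)).mean()
    left_right = np.abs(tex[:, 0].astype(float) - tex[:, -1].astype(float)).mean()
    return (top_bottom + left_right) / 2

def cover_surface(tex, rows, cols):
    # Because the texture tiles, covering a larger surface is pure repetition.
    return np.tile(tex, (rows, cols, 1))

tex = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in material
sofa_fabric = cover_surface(tex, 4, 8)  # e.g., fill a 1024 x 2048 region
print(seam_error(tex), sofa_fabric.shape)
```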

An AI editing tool lets artists modify a specific area of the material they’re working on, such as the center of a coffee table texture.
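
Conceptually, a localized edit like this blends a freshly generated patch into the existing texture only where the artist’s mask is set. A simple sketch of that compositing step (the mask and the generated patch are assumed inputs; the generation model itself is out of scope):

```python
import numpy as np

def apply_masked_edit(texture, generated_patch, mask):
    """Blend a generated patch into the texture only where mask == 1.

    texture, generated_patch: (H, W, 3) arrays of the same shape.
    mask: (H, W) array in [0, 1]; 1 = take the edit, 0 = keep the original.
    """
    alpha = mask.astype(float)[..., None]  # broadcast over RGB channels
    blended = alpha * generated_patch + (1 - alpha) * texture
    return blended.astype(texture.dtype)

# Example: restrict an edit to the center of a coffee-table texture.
texture = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in texture
patch = np.full_like(texture, 200)                 # stand-in AI-generated edit
mask = np.zeros((512, 512))
mask[128:384, 128:384] = 1.0                       # artist's selected region
edited = apply_masked_edit(texture, patch, mask)
```

A soft, feathered mask keeps the border of the edit from reading as a hard seam.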

The AI-generated materials support physics-based rendering, responding realistically to changes in the scene’s lighting. They include normal, roughness and ambient occlusion maps — features that are critical to creating and fine-tuning materials for photorealistic 3D scenes.
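
In OpenUSD terms, those maps are typically wired into a physically based preview shader as UsdUVTexture nodes. A rough sketch with hypothetical file names (a production setup would also handle UV readers, color spaces and normal-map scale/bias):

```python
from pxr import Usd, UsdShade, Sdf

stage = Usd.Stage.CreateNew("fabric_mat.usda")
mat = UsdShade.Material.Define(stage, "/Looks/FabricMat")
pbr = UsdShade.Shader.Define(stage, "/Looks/FabricMat/PBR")
pbr.CreateIdAttr("UsdPreviewSurface")

def texture_node(name, path):
    # UsdUVTexture samples an image file and exposes r/g/b/rgb outputs.
    node = UsdShade.Shader.Define(stage, f"/Looks/FabricMat/{name}")
    node.CreateIdAttr("UsdUVTexture")
    node.CreateInput("file", Sdf.ValueTypeNames.Asset).Set(Sdf.AssetPath(path))
    return node

# Normal map perturbs shading normals so lighting reacts to surface detail.
normal_tex = texture_node("Normal", "fabric_normal.png")
pbr.CreateInput("normal", Sdf.ValueTypeNames.Normal3f).ConnectToSource(
    normal_tex.CreateOutput("rgb", Sdf.ValueTypeNames.Float3))

# Roughness controls how sharp or diffuse specular highlights appear.
rough_tex = texture_node("Roughness", "fabric_rough.png")
pbr.CreateInput("roughness", Sdf.ValueTypeNames.Float).ConnectToSource(
    rough_tex.CreateOutput("r", Sdf.ValueTypeNames.Float))

# Ambient occlusion darkens crevices that ambient light can't reach.
ao_tex = texture_node("AO", "fabric_ao.png")
pbr.CreateInput("occlusion", Sdf.ValueTypeNames.Float).ConnectToSource(
    ao_tex.CreateOutput("r", Sdf.ValueTypeNames.Float))

mat.CreateSurfaceOutput().ConnectToSource(pbr.ConnectableAPI(), "surface")
stage.GetRootLayer().Save()
```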

When accelerated on NVIDIA Tensor Core GPUs, materials can be generated in near real time, and can be upscaled in the background, achieving up to 4K resolution while creators continue to refine other parts of the scene.
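
That “keep refining while it upscales” behavior can be modeled as running super resolution on a worker thread and swapping in the 4K result when it lands. A minimal sketch, where upscale_to_4k stands in for the actual GPU-accelerated model:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)  # one background upscale at a time

def upscale_in_background(texture, upscale_to_4k, swap_into_scene):
    """Start super resolution without blocking the editing session.

    upscale_to_4k: callable running the super-resolution model (assumed).
    swap_into_scene: callback that replaces the preview-resolution texture;
    a real app would marshal this back onto its UI or render thread.
    """
    future = pool.submit(upscale_to_4k, texture)
    future.add_done_callback(lambda f: swap_into_scene(f.result()))
    return future  # the artist keeps editing while this runs
```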

Across creative industries — including architecture, game development and interior design — these capabilities could help artists quickly explore ideas and experiment with different aesthetic styles to create multiple versions of a scene.

A game developer, for example, could use these generative AI features to speed up the process of designing an open world environment or creating a character’s wardrobe. An architect could experiment with different styles of building facades in various lighting environments.

Build Generative AI Services With NVIDIA Picasso 

These capabilities for physics-based material generation will be made available in NVIDIA Picasso, a cloud-based foundry that allows companies to build, optimize and fine-tune their own generative AI foundation models for visual content.

Picasso enables content providers to develop generative AI tools and services trained on fully licensed, rights-reserved data. It’s part of NVIDIA AI Foundations, a set of model-making services that advance generative AI across text, visual content and biology.

At today’s SIGGRAPH keynote, NVIDIA founder and CEO Jensen Huang also announced a new Picasso feature to generate photorealistic 360-degree HDRi environment maps that light 3D scenes using simple text or image prompts.
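
An HDRi environment map typically enters a USD scene as a dome light that provides image-based lighting. A small UsdLux sketch, with a placeholder file name for the generated map:

```python
from pxr import Usd, UsdLux, Sdf

stage = Usd.Stage.CreateNew("lighting.usda")

# A dome light surrounds the scene and uses the HDR image for image-based
# lighting, so materials pick up its colors, direction and intensity.
dome = UsdLux.DomeLight.Define(stage, "/Env/SkyDome")
dome.CreateTextureFileAttr(Sdf.AssetPath("generated_env_360.hdr"))  # placeholder
dome.CreateTextureFormatAttr(UsdLux.Tokens.latlong)  # equirectangular 360 image
stage.GetRootLayer().Save()
```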

Research on Display at SIGGRAPH’s Real-Time Live 

Real-Time Live is one of the most anticipated events at SIGGRAPH. This year, the showcase featured more than a dozen jury-reviewed projects, including those from teams at Roblox, the University of Utah and Metaphysic, a member of the NVIDIA Inception program for cutting-edge startups.

At the event, NVIDIA researchers presented this interactive materials research live, including a demo of the super-resolution tool.

Learn about the latest advances in generative AI, graphics and more by joining NVIDIA at SIGGRAPH, running through Thursday, Aug. 10.