NVIDIA Omniverse Enterprise Delivers the Future of 3D Design and Real-Time Collaboration

by Richard Kerris

For millions of professionals around the world, 3D workflows are essential.

Everything they build, from cars to products to buildings, must first be designed or simulated in a virtual world. At the same time, more organizations are tackling complex designs while adjusting to a hybrid work environment.

As a result, design teams need a solution that helps them improve remote collaboration while managing 3D production pipelines. And NVIDIA Omniverse is the answer.

NVIDIA Omniverse Enterprise, now available, helps professionals across industries transform complex 3D design workflows. The groundbreaking platform lets global teams working across multiple software suites collaborate in real time in a shared virtual space.

Designed for the Present, Built for the Future

With Omniverse Enterprise, professionals gain new capabilities to boost traditional visualization workflows. It’s a newly launched subscription that brings fully supported software to 3D organizations of any scale.

The foundation of Omniverse is Pixar’s Universal Scene Description, an open-source file format that enables users to enhance their design process with real-time interoperability across applications. Additionally, the platform is built on NVIDIA RTX technology, so creators can render faster, do multiple iterations at no opportunity cost, and quickly achieve their final designs with stunning, photorealistic detail.
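Because USD has a human-readable ASCII encoding (`.usda`), the interchange format itself is easy to inspect. As a rough illustration only (production pipelines would use Pixar's official `pxr` Python bindings, e.g. `Usd.Stage`, rather than writing text by hand), the sketch below composes a minimal USD layer as plain text:

```python
# Minimal sketch: a USD layer in its ASCII (.usda) encoding, written by hand.
# Real tools use Pixar's `pxr` bindings; the point here is just that the
# interchange format underlying Omniverse is open and inspectable.
MINIMAL_LAYER = """#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Cube "Box"
    {
        double size = 2
    }
}
"""

def write_layer(path: str) -> None:
    """Write the minimal layer to disk, where any USD-aware app can open it."""
    with open(path, "w") as f:
        f.write(MINIMAL_LAYER)

write_layer("minimal.usda")
```

Any USD-aware application connected to the same scene sees edits to such layers, which is the mechanism behind Omniverse's live, cross-application collaboration.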

Ericsson, a leading telecommunications company, is using Omniverse Enterprise to create a digital twin of a 5G radio network to simulate and visualize signal propagation and performance. Within Omniverse, Ericsson has built a true-to-reality city-scale simulation environment, bringing in scenes, models and datasets from Esri CityEngine.

A New Experience for 3D Design

Omniverse Enterprise is available worldwide through global computer makers BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro. Many companies have already experienced the advanced capabilities of the platform.

Epigraph, a leading provider for companies such as Black & Decker, Yamaha and Wayfair, creates physically accurate 3D assets and product experiences for e-commerce. BOXX Technologies helped Epigraph achieve faster rendering with Omniverse Enterprise and NVIDIA RTX A6000 graphics. The advanced RTX Renderer in Omniverse enabled Epigraph to render images at final-frame quality faster, while significantly reducing the amount of computational resources needed.

Media.Monks is exploring ways to enhance and extend their workflows in a virtual world with Omniverse Enterprise, together with HP. The combination of remote computing and collocated workstations enables the Media.Monks design, creative and solutions teams to accelerate their clients’ digital transformation toward a more decentralized future. In collaboration with NVIDIA and HP, Media.Monks is exploring new approaches and the convergence of collaboration, real-time graphics, and live broadcast for a new era of brand virtualization.

Dell Technologies is presenting at GTC to show how Omniverse is advancing the hybrid workforce with Dell Precision workstations, Dell EMC PowerEdge servers and Dell Technologies Validated Designs. The interactive panel discussion will dive into why users need Omniverse today, and how Dell is helping more professionals adopt this solution, from the desktop to the data center.

And Lenovo is showcasing how advanced technologies like Omniverse are making remote collaboration seamless. Whether it’s connecting to a powerful mobile workstation on the go, a physical workstation back in the office, or a virtual workstation in the data center, Lenovo, TGX and NVIDIA are providing remote workers with the same experience they get at the office.

These systems manufacturers have also enabled other Omniverse Enterprise customers such as Kohn Pedersen Fox, Woods Bagot and WPP to improve their efficiency and productivity with real-time collaboration.

Experience Virtual Worlds With NVIDIA Omniverse

NVIDIA Omniverse Enterprise is now generally available by subscription from BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro.

The platform is optimized and certified to run on NVIDIA RTX professional mobile workstations and NVIDIA-Certified Systems, including desktops and servers on the NVIDIA EGX platform.

With Omniverse Enterprise, creative and design teams can connect their workflows across Autodesk 3ds Max, Maya and Revit; Epic Games’ Unreal Engine; McNeel & Associates’ Rhino and Grasshopper; and Trimble SketchUp through live-edit collaboration. Learn more about NVIDIA Omniverse Enterprise and our 30-day evaluation program. For individual artists, there’s also a free beta version of the platform available for download.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address.

AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

by Adam Scraba

The age of video analytics AI agents is here.

Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for enterprises across industries, it is one of the largest and most ubiquitous data sources in the world. Yet less than 1% of it is analyzed for insights.

Nearly half of global GDP comes from physical industries — spanning energy to automotive and electronics. With labor shortage concerns, manufacturing onshoring efforts and rising demand for automation, video analytics AI agents will play a more critical role than ever, helping bridge the physical and digital worlds.

To accelerate the development of these agents, NVIDIA today is making the AI Blueprint for video search and summarization (VSS), powered by the NVIDIA Metropolis platform, generally available — giving developers the tools to create and deploy highly capable AI agents for analyzing vast volumes of real-time and archived video.

A wave of vision AI agents and productivity assistants powered by vision language models (VLMs) is coming online. Combining powerful computer vision models with the reasoning skills of large language models (LLMs), these video analytics AI agents allow enterprises to easily see, search and summarize huge volumes of video. By analyzing videos in real time or reviewing terabytes of recorded video, video analytics AI agents are unlocking unprecedented value and opportunities across a range of important industries.

Manufacturers and warehouses are using AI agents to help increase worker safety and productivity. For example, agents can help distribute forklifts and position workers for optimal efficiency. Smart cities are deploying video analytics AI agents to reduce traffic congestion and increase safety, among many other uses.

A Blueprint to Create Diverse Fleets of Video Analytics AI Agents

The VSS blueprint is built on top of the NVIDIA Metropolis platform and boosted by VLMs and LLMs such as NVIDIA VILA and NVIDIA Llama Nemotron, NVIDIA NeMo Retriever microservices, and retrieval-augmented generation (RAG) — a technique that connects LLMs to a company’s enterprise data.

The VSS blueprint incorporates the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices for VLMs, LLMs and advanced AI frameworks for RAG. With the VSS blueprint, users can summarize a video 100x faster than watching in real time. For example, an hourlong video can be summarized in text in less than one minute.
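Taking the stated figures at face value, the arithmetic checks out: at a 100x speedup over real-time viewing, an hourlong video reduces to 36 seconds of processing, consistent with the under-a-minute claim. A trivial check:

```python
# Sanity-check the claimed speedup against the worked example in the text.
video_minutes = 60          # an hourlong video
speedup = 100               # "100x faster than watching in real time"
processing_seconds = video_minutes * 60 / speedup
print(processing_seconds)   # 36.0 seconds, i.e. under one minute
```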

The VSS blueprint offers a host of powerful features designed to provide robust video understanding, performance and scalability.

This release introduces expanded hardware support, including the ability to deploy on a single NVIDIA A100 or H100 GPU for smaller workloads, offering greater flexibility in resource allocation. The blueprint can also be deployed at the edge on the NVIDIA RTX 6000 PRO and NVIDIA DGX Spark computing platforms.

The VSS blueprint can process hundreds of live video streams or burst clips simultaneously. In addition to visual understanding, it offers audio transcription. Converting speech to text adds contextual depth in scenarios where audio is critical — such as training videos, keynotes or team meetings.

Industry Leaders Deploy Video Analytics AI Agents to Drive Business Value

Organizations ranging from the world’s leading manufacturers to smart cities and sports leagues are using the VSS blueprint to develop AI agents for optimizing operations.

Pegatron, a leading electronics manufacturing company, uses the VSS blueprint to study operating procedures and train employees on best practices. The company is also integrating the blueprint into its PEGAAi platform so organizations can build AI agents to transform manufacturing processes.

These agents can ingest and analyze massive volumes of video, enabling advanced capabilities like automated monitoring, anomaly detection, video search and incident reporting. Pegatron’s Visual Analytics Agent can be used to understand operating procedures for printed circuit board assembly and identify when actions are correct or incorrect. To date, the agents have reduced Pegatron’s labor costs by 7% and defect rates by 67%.

Additional leading Taiwanese semiconductor and electronics manufacturers are building AI agents and digital twins to optimize their planning and operational applications.

Kaohsiung City, Taiwan, is using a unified smart city vision AI application developed by its partner, Linker Vision, to improve incident response times. Previously, city departments such as waste management, transportation and emergency response were isolated by siloed infrastructure — leading to slow response times due to lack of access to critical information.

Powered by the VSS blueprint, Linker Vision’s AI-powered application has agents that combine real-time video analytics with generative AI to not just detect visual elements but also understand and narrate complex urban events like floods or traffic accidents.

Linker Vision currently delivers timely insights to 12 city departments and is on track to scale from 30,000 city cameras to over 50,000 by 2026. These insights are providing improved situational awareness and data-driven decision-making across city services, and reducing incident response times by up to 80%.

The National Hockey League used the VAST InsightEngine with the VSS blueprint to streamline and accelerate the vision AI workflows that manage its massive volumes of game footage.

With the VAST InsightEngine, the NHL is positioned to search through petabytes of video in under a second, enabling near-instant retrieval of highlights and in-game moments. AI-driven agentic workflows further enhance content creation by automatically clipping, tagging and assembling video content for ease of access and use.

In the future, the League could potentially use real-time AI reasoning to enable tailored insights — such as player stats, strategy analyses or fantasy recommendations — generated dynamically during live games. This end-to-end automation could transform how media is created, curated and delivered, setting a new standard for AI-driven sports content production.

Siemens is using its Industrial Copilot for Operations to assist factory floor workers with equipment maintenance tasks, error handling and performance optimization. This generative AI-powered assistant offers real-time answers to equipment errors, drawing on operational data and equipment documentation.

The copilot was built with VSS components such as VLMs, LLMs and NVIDIA NeMo microservices. The Industrial Copilot has enabled faster decision-making and reduced machine downtime. Siemens has reported a 30% increase in productivity, with the potential to reach 50%.

Supported by an Expanding Partner Ecosystem Creating Sophisticated AI Agents

NVIDIA partners are using the VSS blueprint to expedite the creation of agentic AI video analytics capabilities for their workflows, reducing development time from months to weeks.

Superb AI, a leader in intelligent video analytics, set up a sophisticated airport operations project at Incheon Airport to reduce passenger wait times in a matter of weeks. In Malaysia, solution provider ITMAX is building advanced visual AI agents with the VSS blueprint for the City of Kuala Lumpur to improve overall city management and reduce incident response times.

In the advertising sector, PYLER integrated the VSS blueprint into its brand safety (AiD) and ad targeting (AiM) solutions in just a few weeks. Using AiD and AiM, Samsung Electronics increased advertising effectiveness with brand- and product-aligned, high-value ad placements. BYD saw its ad click-through rates increase 4x by targeting contextually relevant and positive content, while Hana Financial Group surpassed multiple brand campaign goals.

Fingermark is the application provider of Eyecue, a real-time computer vision platform used by quick service restaurants. Fingermark is adding the VSS blueprint into Eyecue to turn video footage into clear, actionable insights regarding drive-thru wait times, service bottlenecks and staff-related incidents at scale.

Try the VSS blueprint on build.nvidia.com and read this technical blog for more details.

Watch the COMPUTEX keynote from NVIDIA founder and CEO Jensen Huang, as well as NVIDIA GTC Taipei 2025 sessions.

Enterprises Ignite Big Savings With NVIDIA-Accelerated Apache Spark

Customers save millions with NVIDIA-accelerated Apache Spark as NVIDIA rolls out Project Aether, enabling enterprises to automatically accelerate their data-center-scale analytics workloads.
by Andrew Feng

Tens of thousands of companies worldwide rely on Apache Spark to crunch massive datasets to support critical operations, as well as predict trends, customer behavior, business performance and more. The faster a company can process and understand its data, the more it stands to earn and save.

That’s why companies with massive datasets — including the world’s largest retailers and banks — have adopted NVIDIA RAPIDS Accelerator for Apache Spark. The open-source software runs on top of the NVIDIA accelerated computing platform to significantly accelerate the processing of end-to-end data science and analytics pipelines — without any code changes.
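The “without any code changes” point refers to enabling the accelerator purely through Spark configuration: the plugin rewrites supported query plans to run on GPUs while the application code stays untouched. As a hedged sketch (a config fragment, not a runnable recipe — the jar name and version are placeholders, and the exact settings depend on your Spark and CUDA versions), a spark-submit invocation looks roughly like:

```shell
# Sketch: enable RAPIDS Accelerator for Apache Spark via configuration only.
# The jar path below is a placeholder; consult the RAPIDS Accelerator docs
# for the release matching your Spark and CUDA versions.
spark-submit \
  --jars rapids-4-spark_2.12-<version>.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  your_existing_job.py   # unchanged application code
```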

To make it even easier for companies to get value out of NVIDIA-accelerated Spark, NVIDIA today unveiled Project Aether — a collection of tools and processes that automatically qualify, test, configure and optimize Spark workloads for GPU acceleration at scale.

Project Aether Completes a Year’s Worth of Work in Less Than a Week 

Customers using Spark in production often manage tens of thousands of complex jobs, or more. Migrating from CPU-only to GPU-powered computing offers numerous and significant benefits, but can be a manual and time-consuming process.

Project Aether automates the myriad steps that companies previously have done manually, including analyzing all of their Spark jobs to identify the best candidates for GPU acceleration, as well as staging and performing test runs of each job. It uses AI to fine-tune the configuration of each job to obtain the maximum performance.

To understand the impact of Project Aether, consider an enterprise that has 100 Spark jobs to complete. With Project Aether, each of these jobs can be configured and optimized for NVIDIA GPU acceleration in as little as four days. The same process done manually by a single data engineer could take up to an entire year.
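Using only the round figures from the example above, the implied reduction in wall-clock effort is roughly two orders of magnitude:

```python
# Rough arithmetic from the example: 100 Spark jobs, as little as 4 days
# with Project Aether vs. up to a year for a single engineer working manually.
jobs = 100
manual_days = 365                        # "up to an entire year"
aether_days = 4                          # "in as little as four days"
per_job_manual = manual_days / jobs      # 3.65 days of manual work per job
reduction = manual_days / aether_days    # 91.25x less wall-clock time
print(per_job_manual, reduction)
```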

CBA Drives AI Transformation With NVIDIA-Accelerated Apache Spark

Running Apache Spark on NVIDIA accelerated computing helps enterprises around the world complete jobs faster and with less hardware compared with using CPUs only — saving time, space, power and cooling, as well as on-premises capital and operational costs in the cloud.

Australia’s largest financial institution, the Commonwealth Bank of Australia, is responsible for processing 60% of the continent’s financial transactions. CBA was experiencing challenges from the latency and costs associated with running its Spark workloads. Using CPU-only computing clusters, the bank estimates it faced nearly nine years of processing time for its training backlog — on top of handling already taxing daily data demands.

“With 40 million inferencing transactions a day, it was critical we were able to process these in a timely, reliable manner,” said Andrew McMullan, chief data and analytics officer at CBA.

Running RAPIDS Accelerator for Apache Spark on GPU-powered infrastructure provided CBA with a 640x performance boost, allowing the bank to process a training workload of 6.3 billion transactions in just five days. Additionally, on its daily volume of 40 million transactions, CBA is now able to conduct inference in 46 minutes and reduce costs by more than 80% compared with using a CPU-based solution.
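The reported figures are mutually consistent: scaling the five-day GPU run by the 640x speedup recovers the near-nine-year CPU-only estimate the bank cited. A quick check:

```python
# Cross-check: 5 days on GPUs at a 640x speedup implies the CPU-only runtime.
gpu_days = 5
speedup = 640
cpu_days = gpu_days * speedup    # 3200 days
cpu_years = cpu_days / 365       # ~8.8 years -- "nearly nine years"
print(round(cpu_years, 1))
```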

McMullan says another value of NVIDIA-accelerated Apache Spark is how it offers his team the compute time efficiency needed to cost-effectively build models that can help CBA deliver better customer service, anticipate when customers may need assistance with home loans and more quickly detect fraudulent transactions.

CBA also plans to use NVIDIA-accelerated Apache Spark to better pinpoint where customers commonly end their digital journeys, enabling the bank to remediate when needed to reduce the rate of abandoned applications.

Global Ecosystem

RAPIDS Accelerator for Apache Spark is available through a global network of partners. It runs on Amazon Web Services, Cloudera, Databricks, Dataiku, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure.

Dell Technologies today also announced the integration of RAPIDS Accelerator for Apache Spark with Dell Data Lakehouse.

To get assistance through NVIDIA Project Aether with a large-scale migration of Apache Spark workloads, apply for access.

To learn more, register for NVIDIA GTC and attend key sessions featuring Walmart, Capital One, CBA and other industry leaders.


AI Maps Titan’s Methane Clouds in Record Time

NVIDIA GPUs powered deep learning to decode years of Cassini data in seconds — helping researchers pioneer a smarter way to explore alien worlds.
by Brian Caulfield

Methane clouds on Titan, Saturn’s largest moon, are more than just a celestial oddity — they’re a window into one of the solar system’s most complex climates.

Until now, mapping them has been slow and grueling work. Enter AI: a team from NASA, UC Berkeley and France’s Observatoire des Sciences de l’Univers just changed the game.

Using NVIDIA GPUs, the researchers trained a deep learning model to analyze years of Cassini data in seconds. Their approach could reshape planetary science, turning what took days into moments.

“We were able to use AI to greatly speed up the work of scientists, increasing productivity and enabling questions to be answered that would otherwise be impractical,” said Zach Yahn, Georgia Tech PhD student and lead author of the study.

Read the full paper, “Rapid Automated Mapping of Clouds on Titan With Instance Segmentation.”

How It Works

At the project’s core is Mask R-CNN — a deep learning model that doesn’t just detect objects. It outlines them pixel by pixel. Trained on hand-labeled images of Titan, it mapped the moon’s elusive clouds: patchy, streaky and barely visible through a smoggy atmosphere.

The team used transfer learning, starting with a model trained on COCO (a dataset of everyday images), and fine-tuned it for Titan’s unique challenges. This saved time and demonstrated how “planetary scientists, who may not always have access to the vast computing resources necessary to train large models from scratch, can still use technologies like transfer learning to apply AI to their data and projects,” Yahn explained.

The model’s potential goes far beyond Titan. “Many other Solar System worlds have cloud formations of interest to planetary science researchers, including Mars and Venus. Similar technology might also be applied to volcanic flows on Io, plumes on Enceladus, linea on Europa and craters on solid planets and moons,” he added.

Fast Science, Powered by NVIDIA

NVIDIA GPUs made this speed possible, processing high-resolution images and generating cloud masks with minimal latency — work that traditional hardware would struggle to handle.

NVIDIA GPUs have become a mainstay for space scientists. They’ve helped analyze Webb Telescope data, model Mars landings and scan for extraterrestrial signals. Now, they’re helping researchers decode Titan.

What’s Next

This AI leap is just the start. Missions like NASA’s Europa Clipper and Dragonfly will flood researchers with data. AI can help handle it, processing data onboard mid-mission and even prioritizing findings in real time. Challenges remain, like creating hardware fit for space’s harsh conditions, but the potential is undeniable.

Methane clouds on Titan hold mysteries. Researchers are now unraveling them faster than ever with help from new AI tools accelerated by NVIDIA GPUs.

Image Credit: NASA Jet Propulsion Laboratory