
What Are Graph Neural Networks?

GNNs apply the predictive power of deep learning to rich data structures that depict objects and their relationships as points connected by lines in a graph.
by Rick Merritt
Graph neural networks (GNNs) and their applications

When two technologies converge, they can create something new and wonderful — as when cellphones and browsers fused to forge smartphones.

Today, developers are applying AI’s ability to find patterns to massive graph databases that store information about relationships among data points of all sorts. Together they produce a powerful new tool called graph neural networks.

What Are Graph Neural Networks?

Graph neural networks apply the predictive power of deep learning to rich data structures that depict objects and their relationships as points connected by lines in a graph.

In GNNs, data points are called nodes and the lines linking them are called edges, with both expressed mathematically so machine learning algorithms can make useful predictions at the level of nodes, edges or entire graphs.
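The nodes-and-edges structure can be sketched in a few lines of plain Python. The node names and feature vectors below are made up purely for illustration:

```python
# A toy graph: nodes carry feature vectors, edges link node pairs.
nodes = {
    "A": [1.0, 0.0],   # each node has a small feature vector
    "B": [0.0, 1.0],
    "C": [1.0, 1.0],
}
edges = [("A", "B"), ("B", "C")]  # undirected links between nodes

# Build an adjacency list so we can look up each node's neighbors.
neighbors = {n: [] for n in nodes}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

print(neighbors["B"])  # B is linked to both A and C
```

A model making node-level predictions would classify each entry in `nodes`; an edge-level model would score pairs in `edges`; a graph-level model would score the whole structure at once.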

What Can GNNs Do?

An expanding list of companies is applying GNNs to improve drug discovery, fraud detection and recommendation systems. These applications and many more rely on finding patterns in relationships among data points.

Researchers are exploring use cases for GNNs in computer graphics, cybersecurity, genomics and materials science. A recent paper reported how GNNs used transportation maps as graphs to improve predictions of arrival time.

Many branches of science and industry already store valuable data in graph databases. With deep learning, they can train predictive models that unearth fresh insights from their graphs.

Example uses of graph neural networks
Knowledge from many fields of science and industry can be expressed as graphs.

“GNNs are one of the hottest areas of deep learning research, and we see an increasing number of applications take advantage of GNNs to improve their performance,” said George Karypis, a senior principal scientist at AWS, in a talk earlier this year.

Others agree. GNNs are “catching fire because of their flexibility to model complex relationships, something traditional neural networks cannot do,” said Jure Leskovec, an associate professor at Stanford, speaking in a recent talk, where he showed the chart below of AI papers that mention them.

Recent papers on graph neural networks

Who Uses Graph Neural Networks?

Amazon reported in 2017 on its work using GNNs to detect fraud. In 2020, it rolled out a public GNN service that others could use for fraud detection, recommendation systems and other applications.

To maintain customers’ trust, Amazon Search employs GNNs to detect malicious sellers, buyers and products. Using NVIDIA GPUs, it’s able to explore graphs with tens of millions of nodes and hundreds of millions of edges while reducing training time from 24 hours to five.

For its part, biopharma company GSK maintains a knowledge graph with nearly 500 billion nodes that is used in many of its machine-learning models, said Kim Branson, the company’s global head of AI, speaking on a panel at a GNN workshop.

LinkedIn uses GNNs to make social recommendations and understand the relationships between people’s skills and their job titles, said Jaewon Yang, a senior staff software engineer at the company, speaking on another panel at the workshop.

“GNNs are general-purpose tools, and every year we discover a bunch of new apps for them,” said Joe Eaton, a distinguished engineer at NVIDIA who is leading a team applying accelerated computing to GNNs. “We haven’t even scratched the surface of what GNNs can do.”

In yet another sign of the interest in GNNs, videos of a course on them that Leskovec teaches at Stanford have received more than 700,000 views.

How Do GNNs Work?

To date, deep learning has mainly focused on images and text, types of structured data that can be described as sequences of words or grids of pixels. Graphs, by contrast, are unstructured. They can take any shape or size and contain any kind of data, including images and text.

Using a process called message passing, GNNs organize graphs so machine learning algorithms can use them.

Message passing embeds into each node information about its neighbors. AI models employ the embedded information to find patterns and make predictions.

Message passing in GNNs
Example dataflows in three types of GNNs.
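One round of the message-passing step described above can be sketched in plain Python. The mean-aggregation and averaging update here are one simple choice among many — real GNNs learn weighted versions of both:

```python
# One round of message passing: each node gathers its neighbors'
# feature vectors, aggregates them, and mixes the result into its
# own embedding.
def message_pass(features, neighbors):
    updated = {}
    for node, own in features.items():
        msgs = [features[nbr] for nbr in neighbors[node]]
        if msgs:
            # mean-aggregate incoming messages, dimension by dimension
            agg = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        else:
            agg = [0.0] * len(own)
        # simple update: average own features with the aggregate
        updated[node] = [(o + a) / 2 for o, a in zip(own, agg)]
    return updated

features = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(message_pass(features, neighbors))
```

After one round, each node's embedding reflects its immediate neighbors; stacking a second round lets information travel two hops, which is why a few layers often suffice.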

For example, recommendation systems use a form of node embedding in GNNs to match customers with products. Fraud detection systems use edge embeddings to find suspicious transactions, and drug discovery models compare entire graphs of molecules to find out how they react to each other.

GNNs are unique in two other ways: They use sparse math, and the models typically only have two or three layers. Other AI models generally use dense math and have hundreds of neural-network layers.
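The need for sparse math follows from a quick back-of-the-envelope count. The node and edge counts below are hypothetical, chosen only to show the scale gap between a dense adjacency matrix and an edge list:

```python
# Why sparse math matters: a dense adjacency matrix stores n * n
# entries, but real-world graphs have far fewer edges than that.
n_nodes = 1_000_000
n_edges = 10_000_000    # hypothetical graph: ~10 edges per node

dense_entries = n_nodes * n_nodes   # full adjacency matrix
sparse_entries = 2 * n_edges        # just (source, target) pairs

print(dense_entries // sparse_entries)  # dense needs 50,000x more entries
```

Storing and multiplying only the nonzero entries is what makes GNN training tractable on graphs with millions of nodes.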

Example pipeline for a graph neural network
A GNN pipeline has a graph as an input and predictions as outputs.

What’s the History of GNNs?

A 2009 paper from researchers in Italy was the first to give graph neural networks their name. But it took eight years before two researchers in Amsterdam demonstrated their power with a variant they called a graph convolutional network (GCN), which is one of the most popular GNNs today.

The GCN work inspired Leskovec and two of his Stanford grad students to create GraphSage, a GNN that showed new ways the message-passing function could work. He put it to the test in the summer of 2017 at Pinterest, where he served as chief scientist.

The GraphSage graph neural network
GraphSage pioneered powerful aggregation techniques for message passing in GNNs.

Their implementation, PinSage, was a recommendation system spanning 3 billion nodes and 18 billion edges that outperformed other AI models of the time.

Pinterest applies it today on more than 100 use cases across the company. “Without GNNs, Pinterest would not be as engaging as it is today,” said Andrew Zhai, a senior machine learning engineer at the company, speaking on an online panel.

Meanwhile, other variants and hybrids have emerged, including graph recurrent networks and graph attention networks. GATs borrow the attention mechanism defined in transformer models to help GNNs focus on portions of datasets that are of greatest interest.

Variations of graph neural networks
One overview of GNNs depicted a family tree of their variants.

Scaling Graph Neural Networks

Looking forward, GNNs need to scale in all dimensions.

Organizations that don’t already maintain graph databases need tools to ease the job of creating these complex data structures.

Those who use graph databases know they’re growing in some cases to have thousands of features embedded on a single node or edge. That presents challenges of efficiently loading the massive datasets from storage subsystems through networks to processors.

“We’re delivering products that maximize the memory and computational bandwidth and throughput of accelerated systems to address these data loading and scaling issues,” said Eaton.

As part of that work, NVIDIA announced at GTC it is now supporting PyTorch Geometric (PyG) in addition to the Deep Graph Library (DGL). These are two of the most popular GNN software frameworks.

NVIDIA tools for creating graph neural networks
NVIDIA provides multiple tools to accelerate building GNNs.

NVIDIA-optimized DGL and PyG containers are performance-tuned and tested for NVIDIA GPUs. They provide an easy place to start developing applications using GNNs.

To learn more, watch a talk on accelerating and scaling GNNs with DGL and GPUs by Da Zheng, a senior applied scientist at AWS. In addition, NVIDIA engineers hosted separate talks on accelerating GNNs with DGL and PyG.

To get started today, sign up for our early access program for DGL and PyG.

Into the Omniverse: How Industrial AI and Digital Twins Accelerate Design, Engineering and Manufacturing Across Industries

by James McKenna
Into the Omniverse imagery with an egocentric car view and industrial factories.

Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.

Industrial AI, digital twins, AI physics and accelerated AI infrastructure are empowering companies across industries to accelerate and scale the design, simulation and optimization of products, processes and facilities before building in the real world.

Earlier this month, NVIDIA and Dassault Systèmes announced a partnership that brings together Dassault Systèmes’ Virtual Twin platforms, NVIDIA accelerated computing, AI physics open models and NVIDIA CUDA-X and Omniverse libraries. This allows designers and engineers to use virtual twins and companions — trained on physics-based world models — to innovate faster, boost efficiency and deliver sustainable products.

Dassault Systèmes’ SIMULIA software now uses NVIDIA CUDA-X and AI physics libraries for AI-based virtual twin physics behavior — empowering designers and engineers to accurately and instantly predict outcomes in simulation.

NVIDIA is adopting Dassault Systèmes’ model-based systems engineering technologies to accelerate the design and global deployment of gigawatt-scale AI factories that are powering industrial and physical AI across industries. Dassault Systèmes will in turn deploy NVIDIA-powered AI factories on three continents through its OUTSCALE sovereign cloud, enabling its customers to run AI workloads while maintaining data residency and security requirements.

These efforts are already making a splash across industries, accelerating industrial development and production processes.

Industrial AI Simulations, From Car Parts to Cheese Proteins 

Digital twins, also known as virtual twins, and physics-based world models are already being deployed to advance industries.

In automotive, Lucid Motors is combining cutting-edge simulation, AI physics open models, Dassault Systèmes’ tools for vehicle and powertrain engineering and digital twin technology to accelerate innovation in electric vehicles. 

In life sciences, scientists and researchers are using virtual twins, Dassault Systèmes’ science-validated world models and the NVIDIA BioNeMo platform to speed molecule and materials discovery, therapeutics design and sustainable food development.

The Bel Group is using technologies from Dassault Systèmes, supported by NVIDIA, to accelerate the development and production of healthier, more sustainable foods for millions of consumers.

The company is using Dassault Systèmes’ industry world models to generate and study food proteins, creating non-dairy protein options that pair with its well-known cheeses, including Babybel. Using accurate, high-resolution virtual twins allows the Bel Group to study and develop validated research outcomes of food proteins more quickly and efficiently.


In industrial automation, Omron is using virtual twins and physical AI to design and deploy automation technology with greater confidence — advancing the shift toward digitally validated production. 

In the aerospace industry, researchers and engineers at Wichita State University’s National Institute for Aviation Research use virtual twins and AI companions powered by Dassault Systèmes’ Industry World Models and NVIDIA Nemotron open models to accelerate the design, testing and certification of aircraft.

Learning From and Simulating the Real World 

Dassault Systèmes’ physics-based Industry World Models are trained to have PhD-level knowledge in fields like biology, physics and materials science. This allows them to accurately simulate real-world environments and scenarios so teams can test industrial operations end to end — from supply chains to store shelves — before deploying changes in the real world.

These virtual models can help researchers and developers with workflows ranging from DNA sequencing to strengthening manufactured materials for vehicles. 

“Knowledge is encoded in the living world,” said Pascal Daloz, CEO of Dassault Systèmes, during his 3DEXPERIENCE World keynote. “With our virtual twins, we are learning from life and are also understanding it in order to replicate it and scale it.”

Get Plugged In to Industrial AI

Learn more about industrial and physical AI by registering for NVIDIA GTC, running March 16-19 in San Jose, kicking off with NVIDIA founder and CEO Jensen Huang’s keynote address on Monday, March 16, at 11 a.m. PT. 

At the conference:

  • Explore an industrial AI agenda packed with hands-on sessions, customer stories and live demos. 
  • Dive into the world of OpenUSD with a special session focused on OpenUSD for physical AI simulation, as well as a full agenda of hands-on OpenUSD learning sessions.
  • Find Dassault Systèmes in the industrial AI and robotics pavilion on the show floor and learn from Florence Hu-Aubigny, executive vice president of R&D at Dassault Systèmes, who’ll present on how virtual twins are shaping the next industrial revolution.
  • Get a live look at GTC with our developer community livestream on March 18, where participants can ask questions, request deep dives and talk directly with NVIDIA engineers in the chat.

Learn how to build industrial and physical AI applications by attending these sessions at GTC.

NVIDIA Virtualizes Game Development With RTX PRO Server

NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs centralize compute infrastructure for content creation, AI, engineering and quality assurance, delivering workstation-class performance at data center scale for game studios.
by Paul Logan

Game development teams are working across larger worlds, more complex pipelines and more distributed teams than ever. At the same time, many studios still rely on fixed, desk-bound GPU hardware for critical production work.

At the Game Developers Conference (GDC) this week in San Francisco, NVIDIA is showcasing a new approach to bring together disparate workflows using virtualized game development on NVIDIA RTX PRO Servers, powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA vGPU software.

With the RTX PRO Server, studios can centralize and virtualize core workflows across creative, engineering, AI research and quality assurance (QA) — all on shared GPU infrastructure in the data center. 

This enables teams to maintain the responsiveness and visual fidelity they expect from workstation-class systems while improving infrastructure utilization, scalability, data security and operational consistency across teams and locations.

Simplifying Complex Workflows

As game development studios scale, hardware can often sit underutilized in one location while other teams wait to access it for production work. QA capacity is hard to expand quickly. Over time, workstation hardware, drivers and tools diverge, making bugs harder to reproduce. AI workloads are often isolated on separate infrastructure, creating more operational overhead. 

The NVIDIA RTX PRO Server helps studios move from workstation-by-workstation scaling to centralized GPU infrastructure. Studios can pool resources, allocate performance by workload and support parallel development, testing and AI workflows without expanding physical workstation sprawl.

Centralized GPU infrastructure enables studios to run AI training, simulation and game automation workloads overnight, then dynamically reallocate the same resources to interactive development during the day, improving overall utilization and reducing idle capacity.

The NVIDIA RTX PRO Server supports virtualized workflows for 3D graphics and AI across the game development lifecycle for:

  • Artists: Providing virtual RTX workstations for traditional 3D and generative AI content-creation workflows.
  • Developers: Powering consistent, high-performance engineering environments for coding and 3D development.
  • AI researchers: Offering large-memory GPU profiles for fine-tuning, inference and AI agents.
  • QA teams: Enabling scalable game validation and performance testing using the same NVIDIA Blackwell architecture used by GeForce RTX 50 Series GPUs.

This allows studios to support multiple teams — including across sites and contractors — on one common GPU platform, improving collaboration and reducing debugging issues that can arise from disparate hardware.

Supporting AI and Engineering on Shared Infrastructure

AI is becoming a core part of everyday game development, spanning coding, content creation, testing and live operations. As these workflows expand, studios need infrastructure that can support AI alongside traditional graphics workloads without introducing separate, siloed systems.

With the RTX PRO Server, studios can support coding agents, internal model experimentation and AI-assisted production workflows without spinning up a separate AI stack for every team.

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU features a massive 96GB memory buffer, enabling developers to run multiple demanding applications simultaneously while supporting AI inference on larger models directly alongside real-time graphics workflows.

NVIDIA Multi-Instance GPU (MIG) technology partitions a single GPU into isolated instances with dedicated memory, compute and cache resources. Combined with NVIDIA vGPU software, MIG can help studios securely allocate GPU capacity across users and workloads. In combined MIG and vGPU configurations, a single RTX PRO 6000 Blackwell Server Edition GPU can support up to 48 concurrent users, maximizing utilization while maintaining performance isolation.

Enterprise-Ready Deployment for Game Studios

NVIDIA RTX PRO Servers are designed for enterprise-grade data-center operations. Studios can deploy virtual workstations on RTX PRO Servers via NVIDIA vGPU on supported hypervisor and remote workstation platforms.

That means RTX PRO Servers can fit into studios’ existing infrastructure and IT practices, rather than requiring one-off deployments.

Major game publishers already use NVIDIA vGPU technology to scale centralized development infrastructure and improve efficiency at studio scale.

Learn more about the NVIDIA RTX PRO Server.

See these workflows live by joining NVIDIA’s booth 1426 at GDC or attending NVIDIA GTC, running March 16-19 in San Jose, California. 

See notice regarding software product information.

GeForce NOW Raises the Game at the Game Developers Conference

Dive into all the latest announcements for GeForce NOW and catch five new games in the cloud, including the latest entry in ‘Monster Hunter Stories’ and Fortnite’s ‘Save The World’ update.
by GeForce NOW Community
GDC news on GeForce NOW

GeForce NOW is bringing the game to the Game Developers Conference (GDC), running this week in San Francisco. While developers build the future of gaming, GeForce NOW is delivering it to gamers. The latest updates bring smoother performance, easier game discovery and a fresh lineup of blockbuster titles to the cloud.

Game discoverability gets a boost with new in-app labels for connected Xbox Game Pass and Ubisoft+ accounts. It’ll be easier than ever to see titles already available through linked subscriptions, so members can seamlessly jump into games they already own.

Virtual reality gets a smooth upgrade — supported devices now stream at 90 frames per second (fps), up from 60 fps, delivering more responsive and immersive virtual reality (VR) experiences.

Account linking is also leveling up. Following Gaijin single sign-on, announced at CES in January, GOG account linking and game library syncing are coming soon.

The GeForce NOW library continues to grow with new releases joining the cloud at launch: CONTROL Resonant and Samson: A Tyndalston Story. Plus, select Xbox titles will join the Install-to-Play library.

In addition, there’s a lineup of five new games to catch this week, including Capcom’s Monster Hunter Stories 3: Twisted Reflection, on top of the latest update for Fortnite.

Gaming Is Buzzing

GeForce NOW is rolling into GDC with an easier way to keep track of titles, as well as performance upgrades and a growing lineup of major titles ready to stream at launch.

Keeping track of which game lives on which service can be tricky. In-app labels — coming soon to GeForce NOW for connected subscriptions — will make it simple for members to know exactly which games they can play on GeForce NOW. Once a member connects their Xbox Game Pass or Ubisoft+ account, clear labels will appear directly on the game art inside the GeForce NOW app — eliminating guesswork and making it easy to see exactly what’s available to play from their game subscription services.

GOG and Gaijin SSO coming to GeForce NOW
Set it and forget it.

Account linking is expanding too. On top of Gaijin single sign-on, GeForce NOW is adding GOG account linking and game library syncing in the coming months.              

90fps VR gaming on GeForce NOW
Smooth moves.

Virtual reality is also getting an upgrade. Starting Thursday, March 19, VR devices that GeForce NOW supports, including Apple Vision Pro, Meta Quest and Pico devices, will stream at 90 fps for Ultimate members, an increase from 60 fps. The higher frame rate enhances smoothness, responsiveness and realism across every session — whether gamers are chasing enemies through neon-lit streets or exploring far‑flung alien worlds.

GeForce NOW’s Install‑to‑Play library is also expanding with select Xbox titles, including Brutal Legend from Double Fine Productions and Contrast from Compulsion Games. These additions bring more flexibility for members to download and install their owned games alongside streaming favorites.

That’s just the start. Highly anticipated games are headed to the cloud at launch:

CONTROL Resonant coming to GeForce NOW
Bending reality.

CONTROL Resonant — Remedy’s upcoming action‑adventure role-playing game (RPG) that blends supernatural powers with a warped Manhattan facing a reality-bending cosmic threat.

Samson coming to GeForce NOW
Unravel a family story steeped in myths.

Samson: A Tyndalston Story — Liquid Swords’ gritty action brawler, set in the city of Tyndalston, launching on PC.

Free to Save the World

Fortnite save the world on GeForce NOW
Chaos in the cloud.

Fortnite’s original adventure is back in the spotlight — and soon, it’ll be free to play. Fortnite first launched in 2017 as a story-driven co‑op experience, and on Thursday, April 16, the “Save the World” update will officially be free to play for all players. Pre-registration begins on Thursday, March 12.

Join forces against hordes of husks, solo or with the squad, in an action-packed player-vs.-environment story, complete with gathering, crafting and collecting. Pick a favored playstyle with four distinct classes to choose from, over 150 heroes and weapons to upgrade, and loadout customization options to hone builds even further. With hundreds of updates since its original launch and over 100 hours of content, squads can build, grind gear and engineer elaborate homebase defenses to keep the Storm King at bay. “Save the World” isn’t available on mobile devices, including tablets.

On GeForce NOW, Fortnite “Save the World” streams straight from the cloud — no waiting around for updates or patches. Low‑latency streaming keeps building, shooting and trap placement feeling snappy across supported devices. Stay in the action with GeForce NOW.

Gear Up for Glory

Battlefield 6 reward on GeForce NOW
The cloud makes it easy to suit up in style.

From chaotic infantry clashes to roaring jet dogfights, every match is an unpredictable explosion of strategy and mayhem in EA’s Battlefield 6.

This week, GeForce NOW Ultimate members can drop into the action with serious style — a new reward, the Advancing Gloom Soldier Skin, gives soldiers a sleek, battle-hardened look fit for the frontlines. Members can claim it in their GeForce NOW account portals, redeem it at EA.com/redeem, then show up ready in true Ultimate fashion. It’s available through Sunday, April 12, or while supplies last.

Being a GeForce NOW member pays off. Whether streaming on the go or maxing out graphics in the cloud, members get exclusive rewards to keep and flaunt.

Start the Games

Monster Hunter Stories 3: Twisted Reflection on GeForce NOW
Twin monsters, one cloud.

Twin Rathalos, born in a twist of fate, set the stage for the third entry in the Monster Hunter Stories RPG series, launching on GeForce NOW. Monster Hunter Stories 3: Twisted Reflection is an RPG set in the Monster Hunter world, where players become Riders who raise and bond with their favorite monsters. Play it instantly on GeForce NOW and take the adventure anywhere, on any device.

In addition, members can look for the following:

  • Warcraft I: Remastered (New release on Ubisoft, March 11)
  • Warcraft II: Remastered (New release on Ubisoft, March 11)
  • 1348 Ex Voto (New release on Steam, March 12, GeForce RTX 5080-ready)
  • John Carpenter’s Toxic Commando (New release on Steam, March 12, GeForce RTX 5080-ready)
  • Monster Hunter Stories 3: Twisted Reflection (New release on Steam, March 12, GeForce RTX 5080-ready)

This week’s additional GeForce RTX 5080-ready game, joining John Carpenter’s Toxic Commando, 1348 Ex Voto and Monster Hunter Stories 3: Twisted Reflection:

  • Greedfall: The Dying World 1.0 (Steam, GeForce RTX 5080-ready)

What are you planning to play this weekend? Let us know on X or in the comments below.