
What Is Federated Learning?

Federated learning is a way to develop and validate AI models from diverse data sources while mitigating the risk of compromising data security or privacy, as the data never leaves individual sites.
by Nicola Rieke

Editor’s note: On April 16, 2024, we updated our original post on federated learning, which was first published October 13, 2019. 

The key to becoming a medical specialist, in any discipline, is experience.

Knowing how to interpret symptoms, which move to make next in critical situations, and which treatment to provide — it all comes down to the training you’ve had and the opportunities you’ve had to apply it.

For AI algorithms, experience comes in the form of large, varied, high-quality datasets. But such datasets have traditionally proved hard to come by, especially in the area of healthcare.

Federated learning is a way to develop and validate accurate, generalizable AI models from diverse data sources while mitigating the risk of compromising data security or privacy. It enables AI models to be built with a consortium of data providers without the data ever leaving individual sites.

Medical institutions have had to rely on their own data sources, which can be biased by, for example, patient demographics, the instruments used or clinical specializations. Or they’ve needed to pool data from other institutions to gather all of the information they need, which requires managing regulatory issues.

Federated learning makes it possible for AI algorithms to gain experience from a vast range of data located at different sites.

The approach enables several organizations to collaborate on the development of models, but without needing to directly share sensitive clinical data with each other.

Over the course of several training iterations, the shared models are exposed to a significantly wider range of data than any single organization possesses in-house.

Federated learning is gaining traction beyond healthcare, moving into financial services, cybersecurity, transportation, high performance computing, energy, drug discovery and other fields.

Frameworks such as NVIDIA FLARE (NVFlare) have enabled enterprises to collaborate on model improvements through federated learning, contributing the learnings from their data without sharing the data itself.

NVFlare, an open-source federated learning framework that’s widely adopted across various applications, offers a wide range of machine learning and deep learning examples. It includes robust security features, advanced privacy-protection techniques and a flexible system architecture, building trust among users.

How Federated Learning Works 

The main concept of federated learning is to train models locally and share only the model parameters, never the data itself.

The aggregator starts with an initial global model and broadcasts its parameters to all clients. Each client node receives the global model parameters, trains the model on its local data and sends the newly trained local model back to the aggregator node. Only model parameters, never private data, are shared with the aggregator.

The aggregator node then performs aggregation, such as a weighted average, to produce a new global model. That new global model is broadcast again, repeating the cycle until the model converges or the maximum number of rounds is reached.
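The aggregation step above can be sketched in a few lines of Python. This is a minimal, framework-free illustration of the weighted-average (FedAvg-style) step, not NVFlare’s actual API; all names are made up for illustration.

```python
# Hypothetical sketch of the server-side aggregation step: a weighted
# average of client parameter vectors, weighted by local dataset size.

def fedavg(client_params, client_sizes):
    """Combine locally trained parameter vectors into a new global model."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    new_global = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        weight = size / total  # clients with more data count for more
        for i, value in enumerate(params):
            new_global[i] += weight * value
    return new_global

# Two clients: the one holding 3x the data pulls the global model toward it.
updated = fedavg(client_params=[[1.0, 2.0], [5.0, 6.0]],
                 client_sizes=[300, 100])
print(updated)  # [2.0, 3.0]
```

Only these parameter vectors and sample counts cross the network; the training examples themselves stay on each client.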

AI algorithms deployed in medical scenarios ultimately need to reach clinical-grade accuracy. Broadly speaking, this means they meet or exceed the gold standard for the application to which they’re applied.

To be considered an expert in a particular medical field, you generally need to have clocked 15 years on the job. Such an expert has probably read around 15,000 cases in a year, which adds up to around 225,000 over their career.

When you consider rare diseases, which affect around one in 2,000 people, even an expert with three decades’ experience will have only seen roughly 100 cases of a particular condition.

To train models that meet the same grade as medical experts, the AI algorithms need to be fed a large number of cases. And these examples need to sufficiently represent the clinical environment in which they’ll be used.

But currently the largest open dataset contains 100,000 cases.

And it’s not only the amount of data that counts. It also needs to be very diverse and incorporate samples from patients of different genders, ages, demographics and environmental exposures.

Individual healthcare institutes may have archives containing hundreds of thousands of records and images, but these data sources are typically kept siloed. This is largely because health data is private and cannot be used without the necessary patient consent and ethical approval.

Federated learning decentralizes deep learning by removing the need to pool data into a single location. Instead, the model is trained in multiple iterations at different sites.

For example, say three hospitals decide to team up and build a model to help automatically analyze brain tumor images.

If they chose to work with a client-server federated approach, a centralized server would maintain the global deep neural network and each participating hospital would be given a copy to train on their own dataset.

Once the model had been trained locally for a couple of iterations, the participants would send their updated version of the model back to the centralized server and keep their dataset within their own secure infrastructure.

The central server would then aggregate the contributions from all of the participants. The updated parameters would then be shared with the participating institutes, so that they could continue local training.

A centralized-server approach to federated learning.
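A toy end-to-end version of this loop, with three simulated hospitals and the “model” reduced to a single number, looks like the following. It is purely illustrative (the data and learning rate are invented), but it shows the broadcast, local-training and aggregation steps converging on a global answer no single site could compute from its own data alone.

```python
# Simulated client-server federated loop: three "hospitals," each keeping
# its data local; only scalar model values travel to and from the server.

hospital_data = {
    "hospital_a": [2.0, 4.0, 6.0],
    "hospital_b": [10.0, 10.0],
    "hospital_c": [1.0, 3.0, 5.0, 7.0, 9.0],
}

def local_training(global_model, data, steps=5, lr=0.5):
    """Gradient steps on a local mean-squared error; raw data never leaves."""
    model = global_model
    for _ in range(steps):
        grad = sum(model - x for x in data) / len(data)
        model -= lr * grad
    return model

global_model = 0.0
for _ in range(10):  # ten federated rounds
    updates, sizes = [], []
    for data in hospital_data.values():
        updates.append(local_training(global_model, data))  # broadcast + local train
        sizes.append(len(data))
    total = sum(sizes)
    # Server aggregates: size-weighted average of the returned local models.
    global_model = sum(u * n / total for u, n in zip(updates, sizes))

pooled_mean = (sum(sum(d) for d in hospital_data.values())
               / sum(len(d) for d in hospital_data.values()))
print(round(global_model, 6), round(pooled_mean, 6))  # both 5.7
```

After a handful of rounds, the federated model matches what pooling all the data in one place would have produced, without any site revealing a single record.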

If one of the hospitals decided it wanted to leave the training team, this would not halt the training of the model, as it’s not reliant on any specific data. Similarly, a new hospital could choose to join the initiative at any time.

This is just one of many approaches to federated learning. The common thread through all approaches is that every participant gains global knowledge from local data — everybody wins.

Why Federated Learning?

Federated learning still requires careful implementation to ensure that patient data is kept secure. But it has the potential to tackle some of the challenges faced by approaches that require the pooling of sensitive clinical data.

For federated learning, clinical data doesn’t need to be taken outside an institution’s own security measures. Every participant keeps control of its own clinical data.

As this makes it harder to extract sensitive patient information, federated learning opens up the possibility for teams to build larger, more diverse datasets for training their AI algorithms.

Implementing a federated learning approach also encourages different hospitals, healthcare institutions and research centers to collaborate on building a model that could benefit them all.

How Federated Learning Can Transform Industries

Federated learning could revolutionize how AI models are trained, with the benefits also filtering out into the wider healthcare ecosystem.

Larger hospital networks would be able to work better together and benefit from access to secure, cross-institutional data, while smaller community and rural hospitals would enjoy access to expert-level AI algorithms.

It could bring AI to the point of care, enabling large volumes of diverse data from across different organizations to be included in model development, while complying with local governance of the clinical data.

Clinicians would have access to more robust AI algorithms, based on data that represents a wider demographic of patients for a particular clinical area or from rare cases that they would not have come across locally. They’d also be able to contribute back to the continued training of these algorithms whenever they disagreed with the outputs.

Healthcare startups could bring cutting-edge innovations to market faster, thanks to a secure approach to learning from more diverse data.

Meanwhile, research institutions would be able to direct their work toward actual clinical needs, based on a wide variety of real-world data, rather than the limited supply of open datasets.

Large-scale federated learning projects are now starting, hoping to improve drug discovery and bring AI benefits to the point of care.

MELLODDY, a drug-discovery consortium based in the U.K., aims to demonstrate how federated learning techniques could give pharmaceutical partners the best of both worlds: the ability to leverage the world’s largest collaborative drug compound dataset for AI training without sacrificing data privacy.

King’s College London is hoping that its work with federated learning, as part of its London Medical Imaging and Artificial Intelligence Centre for Value-Based Healthcare project, could lead to breakthroughs in classifying stroke and neurological impairments, determining the underlying causes of cancers, and recommending the best treatment for patients.

In the context of financial services, federated learning can be applied to train a model using data from several banks to estimate individual transaction risk scores while keeping personal information locally at the banks.

Fraud detection is an important federated learning use case for banking and insurance. Institutions can harness data from user accounts and fraud cases to create better fraud-detection models without sacrificing user data privacy.
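In its simplest form, this kind of collaboration can even be plain federated analytics: each bank reports only aggregate statistics, never individual transactions. The sketch below is illustrative (bank names and counts are invented), showing a shared fraud base rate computed without any raw records leaving a bank.

```python
# Hypothetical federated-analytics step: banks share only (fraud, total)
# counts; the aggregator derives a global base rate for local risk models.

bank_stats = {
    "bank_a": {"fraud": 30, "total": 100_000},
    "bank_b": {"fraud": 12, "total": 40_000},
    "bank_c": {"fraud": 58, "total": 160_000},
}

fraud_total = sum(s["fraud"] for s in bank_stats.values())
txn_total = sum(s["total"] for s in bank_stats.values())
global_rate = fraud_total / txn_total

print(f"global fraud base rate: {global_rate:.5f}")
```

A full federated fraud model would train shared parameters the same way as in the healthcare example above; even this counting step, though, is one that cross-border privacy rules would otherwise complicate.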

This can be challenging without federated learning, considering data privacy protection laws such as the EU’s GDPR and China’s PIPL, which restrict cross-border data sharing, as well as the recent EU AI Act. With federated learning, financial institutions can comply with these laws and regulations while using rich, private datasets for better, safer outcomes.

NVFlare can be used with XGBoost and Kaggle’s Credit Card Fraud Detection dataset for securing credit card transactions and with graph neural networks (GNNs) for financial transaction classification.

Federated learning is also applicable in use cases such as federated data analytics on edge medical devices, cross-border training of autonomous vehicle models and drug discovery. Driven by data privacy regulations, the need to build better models with more private data and the generative AI boom, the adoption of federated learning is accelerating.

Learn more about NVFlare. Explore more about federated learning on related NVIDIA technical blogs. And discover the science behind the approach, in this paper.

 

Into the Omniverse: How Industrial AI and Digital Twins Accelerate Design, Engineering and Manufacturing Across Industries

by James McKenna

Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.

Industrial AI, digital twins, AI physics and accelerated AI infrastructure are empowering companies across industries to accelerate and scale the design, simulation and optimization of products, processes and facilities before building in the real world.

Earlier this month, NVIDIA and Dassault Systèmes announced a partnership that brings together Dassault Systèmes’ Virtual Twin platforms, NVIDIA accelerated computing, AI physics open models and NVIDIA CUDA-X and Omniverse libraries. This allows designers and engineers to use virtual twins and companions — trained on physics-based world models — to innovate faster, boost efficiency and deliver sustainable products.

Dassault Systèmes’ SIMULIA software now uses NVIDIA CUDA-X and AI physics libraries for AI-based virtual twin physics behavior — empowering designers and engineers to accurately and instantly predict outcomes in simulation.

NVIDIA is adopting Dassault Systèmes’ model-based systems engineering technologies to accelerate the design and global deployment of gigawatt-scale AI factories that are powering industrial and physical AI across industries. Dassault Systèmes will in turn deploy NVIDIA-powered AI factories on three continents through its OUTSCALE sovereign cloud, enabling its customers to run AI workloads while maintaining data residency and security requirements.

These efforts are already making a splash across industries, accelerating industrial development and production processes.

Industrial AI Simulations, From Car Parts to Cheese Proteins 

Digital twins, also known as virtual twins, and physics-based world models are already being deployed to advance industries.

In automotive, Lucid Motors is combining cutting-edge simulation, AI physics open models, Dassault Systèmes’ tools for vehicle and powertrain engineering and digital twin technology to accelerate innovation in electric vehicles. 

In life sciences, scientists and researchers are using virtual twins, Dassault Systèmes’ science-validated world models and the NVIDIA BioNeMo platform to speed molecule and materials discovery, therapeutics design and sustainable food development.

The Bel Group is using technologies from Dassault Systèmes, supported by NVIDIA, to accelerate the development and production of healthier, more sustainable foods for millions of consumers.

The company is using Dassault Systèmes’ industry world models to generate and study food proteins, creating non-dairy protein options that pair with its well-known cheeses, including Babybel. Using accurate, high-resolution virtual twins allows the Bel Group to study and develop validated research outcomes of food proteins more quickly and efficiently.

In industrial automation, Omron is using virtual twins and physical AI to design and deploy automation technology with greater confidence — advancing the shift toward digitally validated production. 

In the aerospace industry, researchers and engineers at Wichita State University’s National Institute for Aviation Research use virtual twins and AI companions powered by Dassault Systèmes’ Industry World Models and NVIDIA Nemotron open models to accelerate the design, testing and certification of aircraft.

Learning From and Simulating the Real World 

Dassault Systèmes’ physics-based Industry World Models are trained to have PhD-level knowledge in fields like biology, physics and materials science. This allows them to accurately simulate real-world environments and scenarios so teams can test industrial operations end to end — from supply chains to store shelves — before deploying changes in the real world.

These virtual models can help researchers and developers with workflows ranging from DNA sequencing to strengthening manufactured materials for vehicles. 

“Knowledge is encoded in the living world,” said Pascal Daloz, CEO of Dassault Systèmes, during his 3DEXPERIENCE World keynote. “With our virtual twins, we are learning from life and are also understanding it in order to replicate it and scale it.”

Get Plugged In to Industrial AI

Learn more about industrial and physical AI by registering for NVIDIA GTC, running March 16-19 in San Jose, kicking off with NVIDIA founder and CEO Jensen Huang’s keynote address on Monday, March 16, at 11 a.m. PT. 

At the conference:

  • Explore an industrial AI agenda packed with hands-on sessions, customer stories and live demos. 
  • Dive into the world of OpenUSD with a special session focused on OpenUSD for physical AI simulation, as well as a full agenda of hands-on OpenUSD learning sessions
  • Find Dassault Systèmes in the industrial AI and robotics pavilion on the show floor and learn from Florence Hu-Aubigny, executive vice president of R&D at Dassault Systèmes, who’ll present on how virtual twins are shaping the next industrial revolution.
  • Get a live look at GTC with our developer community livestream on March 18, where participants can ask questions, request deep dives and talk directly with NVIDIA engineers in the chat.

Learn how to build industrial and physical AI applications by attending these sessions at GTC.

NVIDIA Virtualizes Game Development With RTX PRO Server

NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs centralize compute infrastructure for content creation, AI, engineering and quality assurance, delivering workstation-class performance at data center scale for game studios.
by Paul Logan

Game development teams are working across larger worlds, more complex pipelines and more distributed teams than ever. At the same time, many studios still rely on fixed, desk-bound GPU hardware for critical production work.

At the Game Developers Conference (GDC) this week in San Francisco, NVIDIA is showcasing a new approach to bring together disparate workflows using virtualized game development on NVIDIA RTX PRO Servers, powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA vGPU software.

With the RTX PRO Server, studios can centralize and virtualize core workflows across creative, engineering, AI research and quality assurance (QA) — all on shared GPU infrastructure in the data center. 

This enables teams to maintain the responsiveness and visual fidelity they expect from workstation-class systems while improving infrastructure utilization, scalability, data security and operational consistency across teams and locations.

Simplifying Complex Workflows

As game development studios scale, hardware can often sit underutilized in one location while other teams wait to access it for production work. QA capacity is hard to expand quickly. Over time, workstation hardware, drivers and tools diverge, making bugs harder to reproduce. AI workloads are often isolated on separate infrastructure, creating more operational overhead. 

The NVIDIA RTX PRO Server helps studios move from workstation-by-workstation scaling to centralized GPU infrastructure. Studios can pool resources, allocate performance by workload and support parallel development, testing and AI workflows without expanding physical workstation sprawl.

Centralized GPU infrastructure enables studios to run AI training, simulation and game automation workloads overnight, then dynamically reallocate the same resources to interactive development during the day, improving overall utilization and reducing idle capacity.

The NVIDIA RTX PRO Server supports virtualized workflows for 3D graphics and AI across the game development lifecycle for:

  • Artists: Providing virtual RTX workstations for traditional 3D and generative AI content-creation workflows.
  • Developers: Powering consistent, high-performance engineering environments for coding and 3D development.
  • AI researchers: Offering large-memory GPU profiles for fine-tuning, inference and AI agents.
  • QA teams: Enabling scalable game validation and performance testing using the same NVIDIA Blackwell architecture used by GeForce RTX 50 Series GPUs.

This allows studios to support multiple teams — including across sites and contractors — on one common GPU platform, improving collaboration and reducing debugging issues that can arise from disparate hardware.

Supporting AI and Engineering on Shared Infrastructure

AI is becoming a core part of everyday game development, spanning coding, content creation, testing and live operations. As these workflows expand, studios need infrastructure that can support AI alongside traditional graphics workloads without introducing separate, siloed systems.

With the RTX PRO Server, studios can support coding agents, internal model experimentation and AI-assisted production workflows without spinning up a separate AI stack for every team.

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU features a massive 96GB memory buffer, enabling developers to run multiple demanding applications simultaneously while supporting AI inference on larger models directly alongside real-time graphics workflows.

NVIDIA Multi-Instance GPU (MIG) technology partitions a single GPU into isolated instances with dedicated memory, compute and cache resources. Combined with NVIDIA vGPU software, MIG can help studios securely allocate GPU capacity across users and workloads. In combined MIG and vGPU configurations, a single RTX PRO 6000 Blackwell Server Edition GPU can support up to 48 concurrent users, maximizing utilization while maintaining performance isolation.

Enterprise-Ready Deployment for Game Studios

NVIDIA RTX PRO Servers are designed for enterprise-grade data-center operations. Studios can deploy virtual workstations on RTX PRO Servers via NVIDIA vGPU on supported hypervisor and remote workstation platforms.

That means RTX PRO Servers can fit into studios’ existing infrastructure and IT practices, rather than requiring one-off deployments.

Major game publishers already use NVIDIA vGPU technology to scale centralized development infrastructure and improve efficiency at studio scale.

Learn more about the NVIDIA RTX PRO Server.

See these workflows live by joining NVIDIA’s booth 1426 at GDC or attending NVIDIA GTC, running March 16-19 in San Jose, California. 

See notice regarding software product information.

GeForce NOW Raises the Game at the Game Developers Conference

Dive into all the latest announcements for GeForce NOW and catch five new games in the cloud, including the latest entry in ‘Monster Hunter Stories’ and Fortnite’s ‘Save The World’ update.
by GeForce NOW Community

GeForce NOW is bringing the game to the Game Developers Conference (GDC), running this week in San Francisco. While developers build the future of gaming, GeForce NOW is delivering it to gamers. The latest updates bring smoother performance, easier game discovery and a fresh lineup of blockbuster titles to the cloud.

Game discoverability gets a boost with new in‑app labels for connected accounts for Xbox Game Pass and Ubisoft+. It’ll be easier than ever to see titles already available through linked subscriptions, so members can seamlessly jump into games they already own.

Virtual reality gets a smooth upgrade — supported devices now stream at 90 frames per second (fps), up from 60 fps, delivering more responsive and immersive virtual reality (VR) experiences.

Account linking is also leveling up. Following Gaijin single sign-on announced at CES in January, GOG account linking and game library syncing are coming soon.

The GeForce NOW library continues to grow with new releases joining the cloud at launch: CONTROL Resonant and Samson: A Tyndalston Story. Plus, select Xbox titles will join the Install-to-Play library.

In addition, there’s a lineup of five new games to catch this week, including Capcom’s Monster Hunter Stories 3: Twisted Reflection, on top of the latest update for Fortnite.

Gaming Is Buzzing

GeForce NOW is rolling into GDC with an easier way to keep track of titles, as well as performance upgrades and a growing lineup of major titles ready to stream at launch.

Keeping track of which game lives on which service can be tricky. In-app labels, coming soon to GeForce NOW for connected subscriptions, will make it simple for members to know exactly which games they can play on GeForce NOW. Once a member connects their Xbox Game Pass or Ubisoft+ account, clear labels will appear directly on the game art inside the GeForce NOW app — eliminating guesswork and making it easy to see exactly what’s available to play from their game subscription services.

GOG and Gaijin SSO coming to GeForce NOW
Set it and forget it.

Account linking is expanding too. On top of Gaijin single sign-on, GeForce NOW is adding GOG account linking and game library syncing in the coming months.              

90fps VR gaming on GeForce NOW
Smooth moves.

Virtual reality is also getting an upgrade. Starting Thursday, March 19, VR devices that GeForce NOW supports, including Apple Vision Pro, Meta Quest and Pico devices, will stream at 90 fps for Ultimate members, an increase from 60 fps. The higher frame rate enhances smoothness, responsiveness and realism across every session — whether gamers are chasing enemies through neon-lit streets or exploring far‑flung alien worlds.

GeForce NOW’s Install‑to‑Play library is also expanding with select Xbox titles, including Brutal Legend from Double Fine Productions and Contrast from Compulsion Games. These additions bring more flexibility for members to download and install their owned games alongside streaming favorites.

That’s just the start. Highly anticipated games are headed to the cloud at launch:

CONTROL Resonant coming to GeForce NOW
Bending reality.

CONTROL Resonant — Remedy’s upcoming action‑adventure role-playing game (RPG) that blends supernatural powers with a warped Manhattan facing a reality-bending cosmic threat.

Samson coming to GeForce NOW
Unravel a family story steeped in myths.

Samson: A Tyndalston Story — Liquid Swords’ gritty action brawler, set in the city of Tyndalston and launching on PC.

Free to Save the World

Fortnite save the world on GeForce NOW
Chaos in the cloud.

Fortnite’s original adventure is back in the spotlight — and soon, it’ll be free to play. Fortnite first launched in 2017 as a story-driven co‑op experience, and on Thursday, April 16, the “Save the World” update will officially be free to play for all players. Pre-registration begins on Thursday, March 12.

Join forces against hordes of husks, solo or with the squad, in a player vs. environment action-packed story, complete with gathering, crafting and collecting. Pick a favored playstyle with four distinct classes to choose from, over 150 heroes and weapons to upgrade, and loadout customization options to hone builds even further. With hundreds of updates since its original launch and over 100 hours of content, squads can build, grind gear and engineer elaborate homebase defenses to keep the Storm King at bay. “Save the World” isn’t available on mobile devices, including tablets.

On GeForce NOW, Fortnite “Save the World” streams straight from the cloud — no waiting around for updates or patches. Low‑latency streaming keeps building, shooting and trap placement feeling snappy across supported devices. Stay in the action with GeForce NOW.

Gear Up for Glory

Battlefield 6 reward on GeForce NOW
The cloud makes it easy to suit up in style.

From chaotic infantry clashes to roaring jet dogfights, every match is an unpredictable explosion of strategy and mayhem in EA’s Battlefield 6.

This week, GeForce NOW Ultimate members can drop into the action with serious style — a new reward, the Advancing Gloom Soldier Skin, gives soldiers a sleek, battle-hardened look fit for the frontlines. Members can claim it in their GeForce NOW account portals, redeem it at EA.com/redeem, then show up ready in true Ultimate fashion. It’s available through Sunday, April 12, or while supplies last.

Being a GeForce NOW member pays off. Whether streaming on the go or maxing out graphics in the cloud, members get exclusive rewards to keep and flaunt.

Start the Games

MH3 Twisted Reflection on GeForce NOW
Twin monsters, one cloud.

Twin Rathalos, born in a twist of fate, set the stage for the third entry in the Monster Hunter Stories RPG series, launching on GeForce NOW. Monster Hunter Stories 3: Twisted Reflection is the latest entry in the RPG series set in the Monster Hunter world, where players become Riders who raise and bond with their favorite monsters. Play it instantly on GeForce NOW and take the adventure anywhere, on any device.

In addition, members can look for the following:

  • Warcraft I: Remastered (New release on Ubisoft, March 11)
  • Warcraft II: Remastered (New release on Ubisoft, March 11)
  • 1348 Ex Voto (New release on Steam, March 12, GeForce RTX 5080-ready)
  • John Carpenter’s Toxic Commando (New release on Steam, March 12, GeForce RTX 5080-ready)
  • Monster Hunter Stories 3: Twisted Reflection (New release on Steam, March 12, GeForce RTX 5080-ready)

This week’s additional GeForce RTX 5080-ready game, joining John Carpenter’s Toxic Commando, 1348 Ex Voto and Monster Hunter Stories 3: Twisted Reflection:

  • Greedfall: The Dying World 1.0 (Steam, GeForce RTX 5080-ready)

What are you planning to play this weekend? Let us know on X or in the comments below.