
MD Anderson Researchers Harness AI to Transform Cancer Care

Scientists from the leading cancer center tap into the power of AI as they reshape their approach to data.
by Mona Flores

To unlock real insights from data, AI and data science research can’t live on the outskirts of an institution — it has to become part of an organization’s core strategy.

The University of Texas MD Anderson Cancer Center, the top-ranked cancer hospital in the U.S., is doing just that, with a new focus on data governance and dozens of researchers pursuing AI-accelerated oncology projects to improve patient care.

“We are focusing on the data in context, ensuring we have a coordinated metadata supply chain to address the current challenges in making AI models translate to impact in the clinic,” said Dr. Caroline Chung, who was recently appointed MD Anderson’s first chief data officer. “To build better and more robust predictive models, we need a coordinated strategy that covers every step from data generation to the clinical use of machine learning insights.”

This data governance strategy will influence the way hospital data is collected and used for insight generation, and enable findability, accessibility, interoperability and reusability of the data.

“It’s a big culture change,” said Chung. “The more data we can capture with contextual information, the more complex questions we can ask and the greater potential we have to use machine learning insights to help our clinicians improve their interactions with patients to guide the data-driven treatment decisions with the best patient outcomes aligned with the goals of care.”

By building a pipeline that collects the high-quality data researchers need, stores it securely and tracks how it’s being used, MD Anderson aims to better support projects to help clinicians analyze radiology data, deliver cancer treatment and predict complications like sepsis.

Many of these projects are already underway, accelerated by the speed of new GPU-powered technologies, such as NVIDIA DGX systems. New investments coming online at MD Anderson will give researchers access to thousands of additional GPU cores to support AI projects across the institution.

Applying AI to Diagnostic Imaging 

The first step in oncology is detecting tumors — the earlier the better. MD Anderson is developing early detection AI applications to help diagnose patients with pancreatic cancer, which has a five-year survival rate of just 10 percent.

“Pancreatic cancer is often diagnosed after it’s already metastasized, meaning it’s spread to other organs,” said Dr. Eugene Koay, co-director of Gastrointestinal Radiation Oncology at MD Anderson. “We’re working on AI models to analyze the pancreas anytime we see it in a CT scan, MRI study or endoscopic ultrasound, whether or not the patient’s appointment is related to the pancreas.”

Not all pancreatic tumors are the same. Some are slow-growing; others are aggressive. Some originate from cysts in the pancreas; others don't.

In collaboration with the Early Detection Research Network, Koay and his team are working on convolutional neural networks that identify which cases are most likely to develop into malignant cancer, so clinicians can better support patients at risk.
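At the core of a convolutional neural network is the convolution operation itself: sliding a small learned filter across an image to detect local patterns. A minimal, illustrative sketch in plain Python (not MD Anderson's model, whose architecture isn't described here):

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small filter."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A vertical-edge filter applied to a 4x4 image whose right half is bright.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1], [-1, 1]]  # responds where intensity jumps left-to-right
response = convolve2d(image, kernel)  # strongest response along the edge
```

In a real CNN, the filter values are learned from labeled scans rather than hand-designed, and many such filters are stacked in layers.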

Imaging Insights Inform Treatment Planning 

When preparing for radiation therapy to treat cancerous cells, oncologists rely on a process known as contouring to trace the tumors that will be targeted by radiation treatment.

It’s a time-consuming process, and oncologists often have a backlog of radiotherapy treatment plans to create for patients. Dr. Laurence Court, associate professor of Radiation Physics at MD Anderson, hopes to reduce the burden of manual contouring with AI tools, enabling hospitals to treat thousands more cancer patients each year.

He’s especially interested in the impact these AI clinical tools could have in low-resource settings, where a shortage of radiologists and oncologists makes it harder to access lifesaving radiotherapy treatments.

Contouring is also used to plan for MRI-assisted radiosurgery, an advanced form of brachytherapy in which a radiation dose is delivered to cancerous tissue through implanted seeds. MD Anderson radiation oncologist Dr. Steven Frank uses this therapy to treat prostate cancer.

Precise contouring of the prostate and surrounding organs on MRI ensures that radioactive seeds are delivered to the right areas to treat the cancer without harming neighboring tissues.

By adopting an AI model that uses advances in GPU technologies, MD Anderson oncologists have improved the quality of contours for brachytherapy treatment planning and treatment quality assessment, said Dr. Jeremiah Sanders, a medical imaging physics fellow at MD Anderson who’s developing translational AI in Frank’s lab.
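The article doesn't specify how contour quality is scored, but a standard metric for comparing an AI-generated contour against an expert's is the Dice similarity coefficient, sketched here over binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks (1 = inside the contour).

    Returns a value in [0, 1]; 1.0 means the contours overlap perfectly.
    """
    intersection = sum(
        a * b for row_a, row_b in zip(mask_a, mask_b) for a, b in zip(row_a, row_b)
    )
    total = sum(sum(row) for row in mask_a) + sum(sum(row) for row in mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

ai_contour     = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
expert_contour = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
score = dice_coefficient(ai_contour, expert_contour)  # 2*3 / (4+3), about 0.86
```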

Sanders and Frank are also working on a model for use after a brachytherapy procedure — an AI application that analyzes MRI studies of the prostate to determine the quality of the radiation delivery. Insights from this model can help clinicians determine if additional treatment is needed and how to manage patients after their treatments.

Keeping a Watchful AI on Model Accuracy

For an AI model to succeed in a clinical setting, medical researchers need to catch the cases where the neural network struggles and retrain it to improve the application’s performance.

Dr. Kristy Brock, professor of Imaging Physics and Radiation Physics at MD Anderson, is working on an anomaly detection project to determine the cases where an AI model that contours liver tumors from CT scans fails — such as unusual images where a patient has a stent in the liver or fluid around the organ.

By identifying these rare failures, researchers can introduce additional training examples that are similar to cases the neural network previously stumbled on. This continuous training method selectively bolsters training data to improve model performance more efficiently.

“We don’t want to keep collecting data that looks the same as our first 150 scans,” Brock said. “We want to identify cases that will increase the variability of our sample dataset, which in turn boosts the model’s accuracy and generalizability.”
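One simple way to find cases that increase a dataset's variability is greedy farthest-point sampling over image feature vectors: repeatedly pick the candidate scan farthest from everything already in the training set. A hedged sketch (the feature representation and distance measure are assumptions, not details from the article):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_diverse_cases(selected, candidates, k):
    """Greedily choose k candidate feature vectors that most increase variability.

    Each pick maximizes its distance to the nearest already-chosen case.
    """
    chosen = list(selected)
    pool = list(candidates)
    picks = []
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda c: min(euclidean(c, s) for s in chosen))
        picks.append(best)
        chosen.append(best)
        pool.remove(best)
    return picks

# Existing training scans cluster near the origin; one candidate is an outlier
# (say, a patient with a stent), which gets selected first.
training = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.2)]
candidates = [(0.1, 0.1), (5.0, 5.0), (0.2, 0.1)]
picks = pick_diverse_cases(training, candidates, 1)
```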

MD Anderson is one of several leading healthcare institutions adopting AI to improve medical research and patient care. Learn more about AI in healthcare at NVIDIA GTC, running online through Nov. 11.

Tune in to a healthcare special address by Kimberly Powell, NVIDIA’s VP of healthcare, on Nov. 9 at 10:30am Pacific. Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address below. Subscribe to NVIDIA healthcare news here.

AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

by Adam Scraba

The age of video analytics AI agents is here.

Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for enterprises across industries, it is one of the largest and most ubiquitous data sources in the world. Yet less than 1% of it is analyzed for insights.

Nearly half of global GDP comes from physical industries — spanning energy to automotive and electronics. With labor shortage concerns, manufacturing onshoring efforts and rising demand for automation, video analytics AI agents will play a more critical role than ever, helping bridge the physical and digital worlds.

To accelerate the development of these agents, NVIDIA today is making the AI Blueprint for video search and summarization (VSS), powered by the NVIDIA Metropolis platform, generally available — giving developers the tools to create and deploy highly capable AI agents for analyzing vast amounts of real-time and archived video.

A wave of vision AI agents and productivity assistants powered by vision language models (VLMs) is coming online. Combining powerful computer vision models with the language understanding of large language models (LLMs), these video analytics AI agents let enterprises easily see, search and summarize huge volumes of video. By analyzing videos in real time or reviewing terabytes of recorded footage, they're unlocking value and opportunities across a range of important industries.

Manufacturers and warehouses are using AI agents to help increase worker safety and productivity. For example, agents can help distribute forklifts and position workers for optimal efficiency. Smart cities are deploying video analytics AI agents to reduce traffic congestion and increase safety, and the list of uses keeps growing.

A Blueprint to Create Diverse Fleets of Video Analytics AI Agents

The VSS blueprint is built on top of the NVIDIA Metropolis platform and boosted by VLMs and LLMs such as NVIDIA VILA and NVIDIA Llama Nemotron, NVIDIA NeMo Retriever microservices, and retrieval-augmented generation (RAG) — a technique that connects LLMs to a company’s enterprise data.
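Retrieval-augmented generation pairs a retriever over enterprise data with an LLM that answers from the retrieved context. A toy sketch of the retrieval half, using simple word-overlap scoring in place of the learned embeddings a system like NeMo Retriever provides:

```python
def score(query, document):
    """Crude relevance: fraction of query words found in the document."""
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words) / len(q_words)

def retrieve(query, documents, top_k=1):
    """Return the top_k documents most relevant to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:top_k]

# Video-summary snippets standing in for an enterprise knowledge base.
snippets = [
    "Forklift entered aisle 4 at 09:12 and stopped near the loading dock.",
    "Worker completed solder inspection on line 2 without incident.",
    "Traffic camera 17 recorded congestion at the north intersection.",
]
context = retrieve("what happened near the loading dock", snippets)
# The retrieved context would then be inserted into the LLM's prompt.
```

A production retriever swaps the word-overlap score for dense vector similarity, but the shape of the pipeline is the same: embed, retrieve, then generate from the retrieved context.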

The VSS blueprint incorporates the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices for VLMs, LLMs and advanced AI frameworks for RAG. With the VSS blueprint, users can summarize a video 100x faster than watching in real time. For example, an hourlong video can be summarized in text in less than one minute.

The VSS blueprint offers a host of powerful features designed to provide robust video understanding, performance and scalability.

This release introduces expanded hardware support, including the ability to deploy on a single NVIDIA A100 or H100 GPU for smaller workloads, offering greater flexibility in resource allocation. The blueprint can also be deployed at the edge on the NVIDIA RTX 6000 PRO and NVIDIA DGX Spark computing platforms.

The VSS blueprint can process hundreds of live video streams or burst clips simultaneously. In addition to visual understanding, it offers audio transcription. Converting speech to text adds contextual depth in scenarios where audio is critical — such as training videos, keynotes or team meetings.

Industry Leaders Deploy Video Analytics AI Agents to Drive Business Value

Organizations from the world's leading manufacturers to smart cities and sports leagues are using the VSS blueprint to develop AI agents for optimizing operations.

Pegatron, a leading electronics manufacturing company, uses the VSS blueprint to study operating procedures and train employees on best practices. The company is also integrating the blueprint into its PEGAAi platform so organizations can build AI agents to transform manufacturing processes.

These agents can ingest and analyze massive volumes of video, enabling advanced capabilities like automated monitoring, anomaly detection, video search and incident reporting. Pegatron’s Visual Analytics Agent can be used to understand operating procedures for printed circuit board assembly and identify when actions are correct or incorrect. To date, the agents have reduced Pegatron’s labor costs by 7% and defect rates by 67%.

Additional leading Taiwanese semiconductor and electronics manufacturers are building AI agents and digital twins to optimize their planning and operational applications.

Kaohsiung City, Taiwan, is using a unified smart city vision AI application developed by its partner, Linker Vision, to improve incident response times. Previously, city departments such as waste management, transportation and emergency response were isolated by siloed infrastructure — leading to slow response times due to lack of access to critical information.

Powered by the VSS blueprint, Linker Vision’s AI-powered application has agents that combine real-time video analytics with generative AI to not just detect visual elements but also understand and narrate complex urban events like floods or traffic accidents.

Linker Vision currently delivers timely insights to 12 city departments and is on track to scale from 30,000 city cameras to over 50,000 by 2026. These insights are providing improved situational awareness and data-driven decision-making across city services, and reducing incident response times by up to 80%.

The National Hockey League used the VAST InsightEngine with the VSS blueprint to streamline and accelerate the vision AI workflows that manage its massive volumes of game footage.

With the VAST InsightEngine, the NHL is positioned to search through petabytes of video in under a second, enabling near-instant retrieval of highlights and in-game moments. AI-driven agentic workflows further enhance content creation by automatically clipping, tagging and assembling video content for ease of access and use.

In the future, the League could potentially use real-time AI reasoning to enable tailored insights — such as player stats, strategy analyses or fantasy recommendations — generated dynamically during live games. This end-to-end automation could transform how media is created, curated and delivered, setting a new standard for AI-driven sports content production.

Siemens is using its Industrial Copilot for Operations to assist factory floor workers with equipment maintenance tasks, error handling and performance optimization. This generative AI-powered assistant offers real-time answers to equipment errors, drawing on operational and documentation data.

The copilot was built with a fusion of VSS components like VLMs, LLMs and NVIDIA NeMo microservices. The Industrial Copilot has resulted in rapid decision-making and reduced machine downtime. Siemens has reported a 30% increase in productivity, with the potential to reach 50%.

Supported by an Expanding Partner Ecosystem Creating Sophisticated AI Agents

NVIDIA partners are using the VSS blueprint to expedite the creation of agentic AI video analytics capabilities for their workflows, reducing development time from months to weeks.

Superb AI, a leader in intelligent video analytics, set up a sophisticated airport operations project at Incheon Airport to reduce passenger wait times in a matter of weeks. In Malaysia, solution provider ITMAX is building advanced visual AI agents with the VSS blueprint for the City of Kuala Lumpur to improve overall city management and reduce incident response times.

In the advertising sector, PYLER integrated the VSS blueprint into its brand safety (AiD) and ad targeting (AiM) solutions in just a few weeks. Using AiD and AiM, Samsung Electronics increased advertising effectiveness with brand- and product-aligned, high-value ad placements. BYD saw its ad click-through rates increase 4x by targeting contextually relevant and positive content, while Hana Financial Group surpassed multiple brand campaign goals.

Fingermark is the application provider of Eyecue, a real-time computer vision platform used by quick service restaurants. Fingermark is adding the VSS blueprint into Eyecue to turn video footage into clear, actionable insights regarding drive-thru wait times, service bottlenecks and staff-related incidents at scale.

Try the VSS blueprint on build.nvidia.com and read this technical blog for more details.

Watch the COMPUTEX keynote from NVIDIA founder and CEO Jensen Huang, as well as NVIDIA GTC Taipei 2025 sessions.

Enterprises Ignite Big Savings With NVIDIA-Accelerated Apache Spark

Customers save millions with NVIDIA-accelerated Apache Spark as NVIDIA rolls out Project Aether, enabling enterprises to automatically accelerate their data-center-scale analytics workloads.
by Andrew Feng

Tens of thousands of companies worldwide rely on Apache Spark to crunch massive datasets to support critical operations, as well as predict trends, customer behavior, business performance and more. The faster a company can process and understand its data, the more it stands to make and save.

That’s why companies with massive datasets — including the world’s largest retailers and banks — have adopted NVIDIA RAPIDS Accelerator for Apache Spark. The open-source software runs on top of the NVIDIA accelerated computing platform to significantly accelerate the processing of end-to-end data science and analytics pipelines — without any code changes.
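The "without any code changes" claim works because the accelerator hooks in through Spark's plugin mechanism; enabling it is a matter of configuration. A typical spark-submit invocation looks like this (jar path and version are illustrative):

```shell
spark-submit \
  --jars /opt/sparkRapidsPlugin/rapids-4-spark_2.12-24.08.1.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  your_existing_etl_job.py
```

The job script itself is unchanged; the plugin transparently moves supported SQL and DataFrame operations onto the GPU and falls back to the CPU for the rest.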

To make it even easier for companies to get value out of NVIDIA-accelerated Spark, NVIDIA today unveiled Project Aether — a collection of tools and processes that automatically qualify, test, configure and optimize Spark workloads for GPU acceleration at scale.

Project Aether Completes a Year’s Worth of Work in Less Than a Week 

Customers using Spark in production often manage tens of thousands of complex jobs, or more. Migrating from CPU-only to GPU-powered computing offers numerous and significant benefits, but can be a manual and time-consuming process.

Project Aether automates the myriad steps that companies previously have done manually, including analyzing all of their Spark jobs to identify the best candidates for GPU acceleration, as well as staging and performing test runs of each job. It uses AI to fine-tune the configuration of each job to obtain the maximum performance.
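The real qualification logic behind Project Aether isn't public, but the first step, identifying the best candidates for GPU acceleration, can be imagined as ranking jobs by how much of their runtime sits in GPU-friendly operations. A hypothetical sketch (names and thresholds here are invented):

```python
def gpu_candidate_score(job):
    """Estimate what fraction of a job's runtime could benefit from GPUs.

    `job["ops"]` maps operation names to seconds spent; operations like joins,
    aggregations and sorts tend to accelerate well, while UDF-heavy time
    often does not.
    """
    gpu_friendly = {"join", "aggregate", "sort", "scan", "shuffle"}
    total = sum(job["ops"].values())
    accelerable = sum(t for op, t in job["ops"].items() if op in gpu_friendly)
    return accelerable / total if total else 0.0

def rank_candidates(jobs, threshold=0.5):
    """Return names of jobs worth test-running on GPUs, best candidates first."""
    scored = [(gpu_candidate_score(j), j["name"]) for j in jobs]
    return [name for s, name in sorted(scored, reverse=True) if s >= threshold]

jobs = [
    {"name": "daily_sales_rollup",
     "ops": {"scan": 120, "join": 300, "aggregate": 180, "udf": 60}},
    {"name": "legacy_udf_pipeline",
     "ops": {"udf": 500, "scan": 50}},
]
candidates = rank_candidates(jobs)  # only the join/aggregate-heavy job qualifies
```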

To understand the impact of Project Aether, consider an enterprise that has 100 Spark jobs to complete. With Project Aether, each of these jobs can be configured and optimized for NVIDIA GPU acceleration in as little as four days. The same process done manually by a single data engineer could take up to an entire year.

CBA Drives AI Transformation With NVIDIA-Accelerated Apache Spark

Running Apache Spark on NVIDIA accelerated computing helps enterprises around the world complete jobs faster and with less hardware compared with using CPUs only — saving time, space, power and cooling, as well as on-premises capital and operational costs in the cloud.

Australia’s largest financial institution, the Commonwealth Bank of Australia, is responsible for processing 60% of the continent’s financial transactions. CBA was experiencing challenges from the latency and costs associated with running its Spark workloads. Using CPU-only computing clusters, the bank estimates it faced nearly nine years of processing time for its training backlog — on top of handling already taxing daily data demands.

“With 40 million inferencing transactions a day, it was critical we were able to process these in a timely, reliable manner,” said Andrew McMullan, chief data and analytics officer at CBA.

Running RAPIDS Accelerator for Apache Spark on GPU-powered infrastructure provided CBA with a 640x performance boost, allowing the bank to process a training set of 6.3 billion transactions in just five days. Additionally, on its daily volume of 40 million transactions, CBA is now able to conduct inference in 46 minutes and reduce costs by more than 80% compared with using a CPU-based solution.
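A quick back-of-envelope check, using only the figures reported above, shows the numbers hang together:

```python
# CPU-only estimate: roughly 9 years for the training backlog; GPUs: 5 days.
cpu_days = 9 * 365
gpu_days = 5
speedup = cpu_days / gpu_days  # about 657x, in line with the reported 640x

# Daily inference: 40 million transactions in 46 minutes.
tx_per_second = 40_000_000 / (46 * 60)  # roughly 14,500 transactions/second
```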

McMullan said another benefit of NVIDIA-accelerated Apache Spark is the compute efficiency it gives his team to cost-effectively build models that help CBA deliver better customer service, anticipate when customers may need assistance with home loans and detect fraudulent transactions more quickly.

CBA also plans to use NVIDIA-accelerated Apache Spark to better pinpoint where customers commonly end their digital journeys, enabling the bank to remediate when needed to reduce the rate of abandoned applications.

Global Ecosystem

RAPIDS Accelerator for Apache Spark is available through a global network of partners. It runs on Amazon Web Services, Cloudera, Databricks, Dataiku, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure.

Dell Technologies today also announced the integration of RAPIDS Accelerator for Apache Spark with Dell Data Lakehouse.

To get assistance through NVIDIA Project Aether with a large-scale migration of Apache Spark workloads, apply for access.

To learn more, register for NVIDIA GTC and attend key sessions featuring Walmart, Capital One, CBA and other industry leaders.

See notice regarding software product information.

AI Maps Titan’s Methane Clouds in Record Time

NVIDIA GPUs powered deep learning to decode years of Cassini data in seconds — helping researchers pioneer a smarter way to explore alien worlds.
by Brian Caulfield

Methane clouds on Titan, Saturn’s largest moon, are more than just a celestial oddity — they’re a window into one of the solar system’s most complex climates.

Until now, mapping them has been slow and grueling work. Enter AI: a team from NASA, UC Berkeley and France’s Observatoire des Sciences de l’Univers just changed the game.

Using NVIDIA GPUs, the researchers trained a deep learning model to analyze years of Cassini data in seconds. Their approach could reshape planetary science, turning what took days into moments.

“We were able to use AI to greatly speed up the work of scientists, increasing productivity and enabling questions to be answered that would otherwise be impractical,” said Zach Yahn, Georgia Tech PhD student and lead author of the study.

Read the full paper, “Rapid Automated Mapping of Clouds on Titan With Instance Segmentation.”

How It Works

At the project’s core is Mask R-CNN — a deep learning model that doesn’t just detect objects. It outlines them pixel by pixel. Trained on hand-labeled images of Titan, it mapped the moon’s elusive clouds: patchy, streaky and barely visible through a smoggy atmosphere.

The team used transfer learning, starting with a model trained on COCO (a dataset of everyday images), and fine-tuned it for Titan’s unique challenges. This saved time and demonstrated how “planetary scientists, who may not always have access to the vast computing resources necessary to train large models from scratch, can still use technologies like transfer learning to apply AI to their data and projects,” Yahn explained.

The model’s potential goes far beyond Titan. “Many other Solar System worlds have cloud formations of interest to planetary science researchers, including Mars and Venus. Similar technology might also be applied to volcanic flows on Io, plumes on Enceladus, linea on Europa and craters on solid planets and moons,” he added.

Fast Science, Powered by NVIDIA

NVIDIA GPUs made this speed possible, processing high-resolution images and generating cloud masks with minimal latency — work that traditional hardware would struggle to handle.

NVIDIA GPUs have become a mainstay for space scientists. They’ve helped analyze Webb Telescope data, model Mars landings and scan for extraterrestrial signals. Now, they’re helping researchers decode Titan.

What’s Next

This AI leap is just the start. Missions like NASA’s Europa Clipper and Dragonfly will flood researchers with data. AI can help handle it, processing it onboard, mid-mission, and even prioritizing findings in real time. Challenges remain, like creating hardware fit for space’s harsh conditions, but the potential is undeniable.

Methane clouds on Titan hold mysteries. Researchers are now unraveling them faster than ever with help from new AI tools accelerated by NVIDIA GPUs.


Image Credit: NASA Jet Propulsion Laboratory