NVIDIA CUDA-X AI Acceleration Libraries Speed Up Machine Learning in the Cloud by 20x; Available Now on Microsoft Azure

by Jeff Tseng

Data scientists can now accelerate their machine learning projects by up to 20x using NVIDIA CUDA-X AI, NVIDIA’s data science acceleration libraries, on Microsoft Azure.

With just a few clicks, businesses of all sizes can accelerate their data science, turning enormous amounts of data into their competitive advantage faster than ever before.

Microsoft Azure Machine Learning (AML) service is the first major cloud platform to integrate RAPIDS, a key component of NVIDIA CUDA-X AI. With access to the RAPIDS open source suite of libraries, data scientists can run predictive analytics with unprecedented speed on NVIDIA GPUs on AML service.
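For a sense of what the RAPIDS suite provides, here is a minimal, hypothetical sketch using cuDF (GPU dataframes) and cuML (GPU machine learning); the file name and column names are illustrative placeholders, not from this announcement.

# Hypothetical example: GPU-accelerated predictive analytics with RAPIDS.
# The CSV file and column names below are placeholders for illustration.
import cudf
from cuml.linear_model import LinearRegression

# cuDF loads tabular data straight into GPU memory with a pandas-like API.
df = cudf.read_csv("sales.csv")

X = df[["price", "promo", "weekday"]].astype("float32")
y = df["units_sold"].astype("float32")

# cuML fits and predicts entirely on the GPU.
model = LinearRegression()
model.fit(X, y)
predictions = model.predict(X)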

RAPIDS on AML service dramatically boosts performance for the many businesses across a wide range of industries that are using machine learning to create predictive AI models from their vast amounts of data. These include retailers that want to manage inventories better, financial institutions that want to make smarter financial projections, and healthcare organizations that want to detect disease faster and lower administration costs.

Businesses using RAPIDS on AML service can reduce the time it takes to train their AI models by up to 20x, slashing training times from days to hours or from hours to minutes, depending on their dataset size. This is the first time RAPIDS has been integrated natively into a cloud data science platform.

Walmart is an early adopter of RAPIDS, using it to improve the accuracy of its forecasts.

“RAPIDS software has the potential to significantly scale our feature engineering processes – enabling us to run our most complex machine learning models to further improve our forecast accuracy,” said Srini Venkatesan, senior vice president of Supply Chain Technology and Cloud at Walmart. “We’re excited that Azure Machine Learning service is partnering with NVIDIA to offer RAPIDS and GPU-powered compute for data scientists so we can run RAPIDS in the Azure cloud.”

RAPIDS on AML service comes in the form of a Jupyter notebook that uses the AML service SDK to create a resource group, workspace, cluster, and environment with the right configuration and libraries for running RAPIDS code. Template scripts let the user experiment with different data sizes and numbers of GPUs, as well as set up a CPU baseline for comparison.
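As an illustration of the kind of setup the notebook automates, the following is a minimal sketch using the AML service Python SDK; the workspace name, subscription ID, region, and GPU VM size are placeholder assumptions, not the notebook's actual template.

# A minimal sketch (not the notebook's actual template) of provisioning an
# AML service workspace and GPU cluster with the Azure Machine Learning
# Python SDK. Names, subscription ID, region, and VM size are placeholders.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

# Create a workspace; create_resource_group=True also creates the resource group.
ws = Workspace.create(name="rapids-workspace",           # placeholder name
                      subscription_id="<subscription-id>",
                      resource_group="rapids-rg",         # placeholder name
                      create_resource_group=True,
                      location="eastus")                  # placeholder region

# Provision an auto-scaling cluster of GPU VMs for RAPIDS workloads.
gpu_config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC12s_v3",  # placeholder GPU VM size
    min_nodes=0,
    max_nodes=2)

cluster = ComputeTarget.create(ws, "gpu-cluster", gpu_config)
cluster.wait_for_completion(show_output=True)

# The notebook additionally configures an environment with the RAPIDS
# libraries so training code can import cuDF and cuML on the cluster.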

“Our vision is to deliver the best technology that helps customers do transformative work,” said Eric Boyd, corporate vice president of Azure AI at Microsoft. “Azure Machine Learning service is the leading platform for building and deploying machine learning models, and we’re excited to help data scientists unlock significant performance gains with Azure paired with NVIDIA’s GPU acceleration.”

Learn more about NVIDIA CUDA-X AI acceleration libraries.

Check out Microsoft Azure’s blog or attend this GTC session to learn about using RAPIDS on Azure Machine Learning service.