Data scientists working in data analytics, machine learning and deep learning will get a massive speed boost with NVIDIA’s new CUDA-X AI libraries.
Unlocking the flexibility of Tensor Core GPUs, CUDA-X AI accelerates:
- … data science from data ingestion, to ETL, to model training, to deployment.
- … machine learning algorithms for regression, classification, clustering.
- … every deep learning training framework and, with this release, automatically optimizes for NVIDIA Tensor Core GPUs.
- … inference and large-scale Kubernetes deployment in the cloud.
- … data science on PCs, workstations, supercomputers, in the cloud and in enterprise data centers.
- … data science in Amazon Web Services, Google Cloud and Microsoft Azure AI services.
- … data science.
CUDA-X AI accelerates data science.
Introduced today at NVIDIA’s GPU Technology Conference, CUDA-X AI is the only end-to-end platform for the acceleration of data science.
CUDA-X AI arrives as businesses turn to AI — deep learning, machine learning and data analytics — to make data more useful.
The typical workflow is the same for all of these: data processing, feature determination, training, verification and deployment.
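As a rough illustration of those stages, here is a minimal, self-contained sketch using only the Python standard library. The data, the trivial one-parameter model and the function names are purely illustrative assumptions, not NVIDIA APIs; a real pipeline would run each stage on GPU-accelerated libraries.

```python
import random

random.seed(0)

# 1. Data processing: ingest raw records (x, y) where y ~ 2x plus noise.
raw = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(100)]

# 2. Feature determination: here the single feature is x itself.
features = [x for x, _ in raw]
targets = [y for _, y in raw]

# 3. Training: closed-form least-squares slope through the origin.
slope = sum(f * t for f, t in zip(features, targets)) / sum(f * f for f in features)

# 4. Verification: mean absolute error on the held data.
mae = sum(abs(t - slope * f) for f, t in zip(features, targets)) / len(raw)

# 5. Deployment: the trained model reduces to a callable.
def predict(x):
    return slope * x

print(round(slope, 2), round(mae, 3))
```

Each stage hands its output to the next, which is why accelerating every step, not just training, matters for end-to-end throughput.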
CUDA-X AI unlocks the flexibility of our NVIDIA Tensor Core GPUs to uniquely address this end-to-end AI pipeline.
Capable of speeding up machine learning and data science workloads by as much as 50x, CUDA-X AI consists of more than a dozen specialized acceleration libraries.
It’s already accelerating data analysis with cuDF; deep learning primitives with cuDNN; machine learning algorithms with cuML; and data processing with DALI, among others.
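cuDF deliberately mirrors the pandas DataFrame API, so existing data-analysis code can often move to GPU with only an import change. The sketch below is a hedged illustration of that pattern: the pandas fallback exists only so it runs on machines without a CUDA GPU, and the column names and data are made up for the demo.

```python
# ETL-style sketch. cuDF mirrors the pandas API, so the same code runs
# GPU-accelerated under RAPIDS; pandas is a CPU fallback for this demo.
try:
    import cudf as xd  # GPU DataFrame library (requires a CUDA GPU)
except ImportError:
    import pandas as xd  # CPU fallback with a matching API

df = xd.DataFrame({
    "segment": ["a", "b", "a", "b", "a"],   # hypothetical portfolio segments
    "risk":    [0.1, 0.4, 0.3, 0.2, 0.5],   # hypothetical risk scores
})

# Typical ETL step: aggregate mean risk per segment.
per_segment = df.groupby("segment").risk.mean()
print(per_segment)
```

Because the call sites are identical, the same groupby-aggregate logic scales from a laptop test to a GPU cluster without a rewrite.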
Together, these libraries accelerate every step in a typical AI workflow, whether it involves using deep learning to train speech and image recognition systems or data analytics to assess the risk profile of a mortgage portfolio.
Each step in these workflows requires processing large volumes of data, and each step benefits from GPU-accelerated computing.
Broad Adoption
As a result, CUDA-X AI is relied on by top companies such as Charter, Microsoft, PayPal, SAS and Walmart.
It’s integrated into major deep learning frameworks such as TensorFlow, PyTorch and MXNet.
Major cloud service providers around the world use CUDA-X AI to speed up their cloud services.
And today eight of the world’s leading computer makers announced data science workstations and servers optimized to run NVIDIA’s CUDA-X AI libraries.
Available Everywhere
CUDA-X AI acceleration libraries are freely available as individual downloads or as containerized software stacks from the NVIDIA NGC software hub.
They can be deployed everywhere, including desktops, workstations, servers and on cloud computing platforms.
It’s integrated into all the data science workstations announced at GTC today. And all the NVIDIA T4 servers announced today are optimized to run CUDA-X AI.
Learn more at https://www.nvidia.com/en-us/technologies/cuda-x.