Thousands more developers, data scientists and researchers can now jumpstart their GPU computing projects, following today’s announcement that Microsoft Azure is a supported platform with NVIDIA GPU Cloud (NGC).
Ready-to-run containers from NGC on Azure give developers access to on-demand GPU computing that scales to their needs, and eliminate the complexity of software integration and testing.
Getting AI and HPC Projects Up and Running Faster
Building and testing reliable software stacks to run popular deep learning software — such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch and NVIDIA TensorRT — is challenging and time-consuming. There are dependencies at the operating system level and with drivers, libraries and runtimes. And many packages recommend differing versions of the supporting components.
To make matters worse, the frameworks and applications are updated frequently, so this work has to be redone every time a new version rolls out. Ideally, you'd also test each new version to ensure it delivers the same or better performance than before. And all of this comes before you can even get started on a project.
For HPC, the difficulty lies in deploying the latest software to clusters of systems. Beyond finding, installing and testing the correct dependencies, you have to do all of this in a multi-tenant environment and across many systems.
NGC removes this complexity by providing pre-configured containers with GPU-accelerated software. Its deep learning containers benefit from NVIDIA’s ongoing R&D investment to make sure the containers take advantage of the latest GPU features. And we test, tune and optimize the complete software stack in the deep learning containers with monthly updates to ensure the best possible performance.
NVIDIA also works closely with the community and framework developers, and contributes back to open source projects, making more than 800 contributions in 2017 alone. We likewise work with the developers of the other containers available on NGC to optimize their applications, and test them for performance and compatibility.
NGC with Microsoft Azure
You can access 35 GPU-accelerated containers for deep learning software, HPC applications, HPC visualization tools and a variety of partner applications from the NGC container registry and run them on the following Microsoft Azure instance types with NVIDIA GPUs:
- NCv3 (1, 2 or 4 NVIDIA Tesla V100 GPUs)
- NCv2 (1, 2 or 4 NVIDIA Tesla P100 GPUs)
- ND (1, 2 or 4 NVIDIA Tesla P40 GPUs)
The same NGC containers work across Azure instance types, even with different types or quantities of GPUs.
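Launching one of these instances can be done from the Azure CLI. The sketch below, which assumes the Azure CLI (`az`) is installed and logged in, creates an NCv3 virtual machine with a single Tesla V100; the resource group, VM name and image URN are placeholders, not values from this post.

```shell
# Sketch: create an NCv3 instance (1x Tesla V100) suitable for NGC containers.
# The resource group, VM name and image URN below are hypothetical placeholders;
# look up the actual NVIDIA GPU Cloud Image URN in the Azure Marketplace
# (e.g. with `az vm image list`) before running.
az vm create \
  --resource-group my-ngc-rg \
  --name my-ngc-vm \
  --size Standard_NC6s_v3 \
  --image <publisher>:<offer>:<sku>:<version> \
  --admin-username azureuser \
  --generate-ssh-keys
```

Choosing `Standard_NC24s_v3` instead would give the 4-GPU NCv3 configuration, and the same approach applies to the NCv2 and ND sizes.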
Using NGC containers with Azure is simple.
Just go to the Microsoft Azure Marketplace and find the NVIDIA GPU Cloud Image for Deep Learning and HPC (this is a pre-configured Azure virtual machine image with everything needed to run NGC containers). Launch a compatible NVIDIA GPU instance on Azure. Then, pull the containers you want from the NGC registry into your running instance. (You’ll need to sign up for a free NGC account first.) Detailed information is in the “Using NGC with Microsoft Azure” documentation.
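Once the instance is running, the pull step looks roughly like the following. This is a sketch, not the official walkthrough: the TensorFlow container tag is illustrative (browse the NGC registry for current tags), and authentication assumes the API key generated with your free NGC account.

```shell
# Sketch: authenticate to the NGC container registry (nvcr.io) from the
# running Azure instance, then pull and run a framework container.
# Username for NGC is the literal string '$oauthtoken'; the password is
# your NGC API key. The tag 18.05-py3 is illustrative only.
docker login nvcr.io
docker pull nvcr.io/nvidia/tensorflow:18.05-py3

# Start an interactive session in the container with GPU access
# (NGC instructions of this era used the nvidia-docker wrapper):
nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:18.05-py3
```

The same login and pull pattern applies to any of the other containers in the NGC registry.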
In addition to running NGC containers on the NVIDIA-published image from Azure Marketplace, you can use Azure Batch AI to download and run these containers on Azure NCv2, NCv3 and ND virtual machines. Follow these simple GitHub instructions to get started with Batch AI and NGC containers.
With NGC support for Azure, we are making it even easier for everyone to get started with AI or HPC in the cloud. See for yourself how easy it is.