As part of our effort to speed the deployment of GPU-accelerated high-performance computing and AI, we’ve more than tripled the number of containers available from our NVIDIA GPU Cloud (NGC) since launch last year.
Users can now take advantage of 35 deep learning, high-performance computing, and visualization containers from NGC, a story we’ll be telling in depth at this week’s International Supercomputing Conference in Frankfurt.
Over the past three years, containers have become a crucial tool for deploying applications on shared clusters and speeding up that work, especially for researchers and data scientists running AI workloads.
These containers make deploying deep learning frameworks — building blocks for designing, training and validating deep neural networks — faster and easier.
Installing frameworks is complicated and time-consuming. Containers simplify this process, giving users access to the latest application versions with simple pull and run commands.
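For example, with Docker and the nvidia-docker runtime installed, pulling and running a deep learning framework container from the NGC registry (nvcr.io) takes just a couple of commands. The image tag below is illustrative; check the NGC registry for current versions.

```shell
# Log in to the NGC registry using an NGC API key
# (the username is the literal string '$oauthtoken')
docker login nvcr.io

# Pull a deep learning framework container from NGC
# (tag is illustrative -- see the registry for current releases)
docker pull nvcr.io/nvidia/tensorflow:18.06-py3

# Run it interactively with GPU access via nvidia-docker;
# --rm removes the container when the session ends
nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:18.06-py3
```

From inside the container, the framework is ready to use with no installation or dependency management on the host.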
The same deployment challenge applies to HPC and visualization applications.
Moving Fast: New NGC Containers Include CHROMA, CANDLE, PGI and VMD
Since November’s Supercomputing Conference, nine new HPC and visualization containers — including CHROMA, CANDLE, PGI and VMD — have been added to NGC. This is in addition to eight containers, including NAMD, GROMACS and ParaView, launched at the previous year’s conference.
The container for PGI compilers available on NGC will help developers build HPC applications targeting multicore CPUs and NVIDIA Tesla GPUs. PGI compilers and tools enable development of performance-portable HPC applications using OpenACC, OpenMP and CUDA Fortran parallel programming.
Users clearly see the value of NGC containers, with over 27,000 users now registered to access the NGC container registry.
Containers Speed Discoveries
The need for containers isn’t limited to deep learning. Supercomputing has a dire need to simplify application deployment across all segments, because almost all supercomputing centers use environment modules to build, deploy and launch applications.
This is a time-consuming approach that can take days, making it unproductive for both system administrators and end users.
The complexity of such installations keeps supercomputing users from accessing the latest features and optimized performance, in turn delaying discoveries.
Containers Simplify Application Deployment on Shared Systems
Containers offer a great alternative. They eliminate installations entirely, so no one has to track environment module links or worry about breaking them.
Users can pull containers themselves and deploy an application in minutes, rather than waiting days for an advisory council to approve an installation and for the install itself to be completed.
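On shared HPC systems where Docker is often unavailable to unprivileged users, runtimes such as Singularity can pull and run NGC images directly. A sketch of this workflow follows; the image name and tag are illustrative.

```shell
# Pull an HPC application container from NGC and convert it
# to a Singularity image (name and tag are illustrative --
# check the NGC registry for current versions)
singularity build namd.simg docker://nvcr.io/hpc/namd:2.12-171025

# Run the containerized application with GPU support;
# --nv maps the host's NVIDIA driver into the container
singularity exec --nv namd.simg namd2 +p8 input.namd
```

Because the pull runs entirely in user space, no administrator involvement is needed and the host environment is left untouched.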
System administrators can now focus on mission-critical tasks rather than servicing and maintaining applications.
Containers also provide reproducibility and portability: users can run their workloads on different systems without installing the application and get equivalent simulation results. This is especially helpful for verifying results when publishing research papers.
NGC Drives Productivity and Accelerates Discoveries
The applications within the NGC containers are GPU-accelerated, delivering far better performance than CPU-only systems.
Users have access to the latest versions of HPC applications, and the deep learning framework containers are updated and optimized by NVIDIA across the complete software stack monthly to deliver maximum performance on NVIDIA GPUs.
Finally, these containers are tested on a range of systems — GPU-powered workstations, NVIDIA DGX systems, and NVIDIA GPUs on supported cloud service providers, including Amazon Web Services, Google Cloud Platform and Oracle Cloud Infrastructure — for a smooth user experience.