Microsoft GPU-Accelerated Virtual Machines in the Cloud Go Public

Four months after the preview of Microsoft’s Azure N-Series virtual machines attracted thousands of customers, the GPU-accelerated machines are now becoming generally available to a wider audience.

Companies in the south-central U.S., eastern U.S., Western Europe and Southeast Asia will gain access to the cutting-edge N-Series virtual machines in December. The rollout is part of an ongoing collaboration between Microsoft and NVIDIA aimed at helping companies everywhere benefit from advances in AI and machine learning.

The N-Series machines designed for computationally intensive tasks, known as the NC-Series, use our Tesla K80 GPUs and CUDA to run applications such as deep learning, real-time data analytics, high performance computing simulations and DNA sequencing.
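Before running such workloads, it helps to confirm the GPU is actually visible from inside the virtual machine. The short sketch below does that check with PyTorch; the choice of framework is an assumption for illustration only, since the post does not name one.

    # Minimal sanity check, assuming PyTorch is installed on the NC-series VM.
    # It lists any CUDA devices (e.g., the Tesla K80) the framework can see.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    else:
        print("No CUDA device visible; check the NVIDIA driver and CUDA toolkit.")

Running the NVIDIA driver’s nvidia-smi utility from the shell gives an equivalent check without any framework.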

The GPU-accelerated virtual machines in the Azure cloud have already garnered rave reviews from customers who tried the technology during the preview, Microsoft said. For many, the accelerated performance was a game-changer.

GPUs Speed Deep Learning Financial Analysis

One satisfied customer is Noonum, a startup that uses deep neural networks to bring the kind of sophisticated financial analysis used by hedge funds and large investment firms to smaller institutional investors and investment advisors.

Tesla K80 GPUs: the virtual machines pair them with CUDA for compute-intensive tasks.

“Azure’s GPUs have been a boon for our data pipeline,” said Shankar Vaidyanathan, Noonum’s founder and CEO.

When the market closes each day, Noonum downloads data on several hundred economic indicators and feeds them into its deep neural networks, which analyze and forecast the impact on all of the S&P 500 stocks.

“These neural networks need to be trained on new data before our customers wake up the next morning on the East Coast, and Azure’s GPU options help make that possible,” Vaidyanathan said. Using the N-Series virtual machines with NVIDIA Tesla K80 GPUs, the company cut its training time in half, ensuring its predictions are ready for customers on time.
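A nightly training run of this kind might look roughly like the sketch below. The network shape, feature counts and use of PyTorch are illustrative assumptions, not Noonum’s actual pipeline; the only point is that moving the model and data to the GPU is what lets the job finish overnight.

    # Hypothetical nightly training job: a small feed-forward model mapping
    # a few hundred economic-indicator features to forecasts for 500 stocks.
    # All sizes and the framework choice are assumptions for illustration.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(
        nn.Linear(300, 128),   # ~300 indicator features (assumed)
        nn.ReLU(),
        nn.Linear(128, 500),   # one forecast per S&P 500 constituent
    ).to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Synthetic stand-in for the day's downloaded indicator data and targets.
    features = torch.randn(1024, 300, device=device)
    targets = torch.randn(1024, 500, device=device)

    for epoch in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(features), targets)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")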

GPUs for Real-Time Radio Search

Another preview customer, AudioBurst, is making it possible to search, share and integrate audio clips from broadcast media such as radio and online video. It does this by using natural language understanding and a compute-intensive process called Automatic Speech Recognition (ASR) to create live transcriptions of speech so they can be indexed.

“On a non-GPU server, you can manage to run a single stream of live ASR transcription,” said Gal Klein, the company’s co-founder and chief technology officer.

Before switching to the N-Series virtual machines in the Azure cloud, AudioBurst was able to transcribe eight live radio feeds concurrently. After moving to the Tesla K80s on Azure, the company could transcribe four times as many, or 32 concurrent feeds.

“For a company like AudioBurst, the ability to digest as many concurrent live radio feeds as possible is crucial,” Klein said. “This gives us the capability to let users search and access radio content a minute after it was aired.”
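Scaling from one transcription stream per CPU server to dozens per GPU is essentially a fan-out problem. The sketch below shows the shape of such a dispatcher; transcribe_stream and the feed URLs are placeholders, not AudioBurst’s actual API.

    # Hypothetical dispatcher fanning out live radio feeds to concurrent
    # ASR workers. transcribe_stream() and the URLs are placeholders only.
    from concurrent.futures import ThreadPoolExecutor

    def transcribe_stream(feed_url: str) -> None:
        # A real worker would pull audio from feed_url and run GPU-accelerated
        # ASR on it, emitting a rolling transcript for indexing.
        print(f"transcribing {feed_url}")

    # Four times the eight feeds handled before the move to Tesla K80s.
    feeds = [f"http://example.com/radio/{i}" for i in range(32)]

    with ThreadPoolExecutor(max_workers=len(feeds)) as pool:
        list(pool.map(transcribe_stream, feeds))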

For more information, see Microsoft’s blog post on general availability for the GPU virtual machines.
