by Sanford Russell

Microsoft today made an announcement that will accelerate the adoption of GPU computing (that is, the use of GPUs as companion processors to CPUs). The software maker is working on a new C++ language extension, called C++ AMP (Accelerated Massive Parallelism), focused on accelerating applications with GPUs.

With Microsoft now embracing GPUs in its future language and OS roadmap, the decision to go with GPU computing becomes even easier for those programmers still on the fence.

Microsoft’s intent with C++ AMP is to expose GPU acceleration through familiar C++ language capabilities, giving millions of Windows developers the option of using Microsoft Visual Studio-based development tools to accelerate applications with the parallel processing power of GPUs. CUDA C and CUDA C++ will continue to be the preferred platforms for Linux applications and for demanding HPC (high performance computing) applications that need to maximize performance.
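To give a feel for the direction, here is a sketch of the kind of data-parallel code C++ AMP is expected to enable, based on the design Microsoft has described publicly. The extension has not shipped yet, so treat this as an illustrative sketch rather than working code; it would require Microsoft’s Visual C++ compiler and the `<amp.h>` header:

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Square every element of v on an accelerator (e.g. a DX11 GPU).
// Sketch of the announced C++ AMP style: an array_view wraps host data,
// and a restrict(amp)-annotated lambda runs on the accelerator.
void square(std::vector<float>& v) {
    array_view<float, 1> av(static_cast<int>(v.size()), v);
    parallel_for_each(av.extent, [=](index<1> i) restrict(amp) {
        av[i] = av[i] * av[i];
    });
    av.synchronize();  // copy results back into v
}
```

The appeal is that the loop body stays ordinary C++ inside a lambda; the runtime handles data transfer and kernel launch.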

In the spring of 2007, there was just one language (CUDA C) supporting NVIDIA GPUs. Fast forward to today, and our customers have a much wider selection of languages and APIs for GPU computing: CUDA C, CUDA C++, CUDA Fortran, OpenCL, DirectCompute and, in the future, Microsoft C++ AMP. There are even Java and Python wrappers, as well as .NET integration, available that sit on top of CUDA C and CUDA C++.

If you are a Windows C++ developer looking at GPU computing for the first time, there is no need to wait. Visual C++ developers today use our high-performance CUDA C++ with the Thrust C++ template library to easily accelerate applications by parallelizing as little as 1 to 5 percent of their application code and mapping it to NVIDIA GPUs. CUDA C++ comes with a rich ecosystem of profilers, debuggers, and libraries such as cuFFT, cuBLAS, LAPACK, cuSPARSE and cuRAND. NVIDIA’s Parallel Nsight™ for Visual Studio 2010 gives these Windows developers a familiar development environment combined with excellent GPU profiling and debugging tools.
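To make the “1 to 5 percent” point concrete, the hot spot being offloaded is typically a tight, data-parallel loop like the SAXPY below. It is shown here in plain, self-contained C++ so the sketch compiles without the CUDA toolkit; with Thrust, the same loop becomes a single `thrust::transform` call over `thrust::device_vector`s, which runs the operation on the GPU:

```cpp
#include <cstddef>
#include <vector>

// SAXPY (y = a*x + y): the archetypal data-parallel loop. Every iteration
// is independent of the others, which is exactly what makes it easy to
// map onto thousands of GPU threads.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size() && i < y.size(); ++i)
        y[i] = a * x[i] + y[i];
}
```

For example, calling `saxpy(2.0f, x, y)` with `x = {1, 2, 3}` and `y = {10, 20, 30}` leaves `y` as `{12, 24, 36}`.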

The takeaway from Microsoft’s announcement today is that the GPU computing space has reached maturity: the company that produces the world’s most widely used commercial C++ developer tools, Microsoft, is fully embracing GPU computing in its core tools. Rest assured, NVIDIA continues to work closely with Microsoft to help make C++ AMP a success, and we will continue to deliver the best GPU developer tools and training.

Stay tuned for more details.



Comments

  • Suraj AB

    MOAR good news for GPU computing!

  • Habo

    No, actually it’s bad news: It’s from Micro$oft

  • Abhishek Deshpande

     Hahahhaa… 😀 😀 😀

  • Anonymous

    Ahh, the ol’ “Microsoft is evil because they make a lot of money” hate. So sad. Microsoft does great things; it’s computer manufacturers like HP that are horrible, giving Windows a bad name with their poor components and shitty bundled software that bogs down the systems.

    The founder of Microsoft started and runs one of the world’s biggest charities. So that already puts Microsoft in a better place in my book than Apple, that’s for sure. Apple does so many evil things it’s ridiculous; they are detrimental to an open, free-to-choose computing future.

  • Nicolas Capens

    With Intel’s recent announcement of its Haswell New Instructions, I don’t think there’s any future for GPGPU for consumers. FMA and gather instructions will make the CPU very efficient at throughput oriented workloads.

  • Jarrod Smith

    I bet they make it just different enough so that cross platform compatibility becomes a relentless nightmare for us, or at best a fleeting daydream where sensical, unified technology development results in better quality software and greater productivity for all. Ahhh crap I just woke up.

  • Jack Thursby

    So I guess that makes C++0x a bad thing too, since Herb Sutter is the head of the C++ Standards Committee? Also, C++ AMP will be an open standard, so everybody can develop their own implementation of it for any platform they want. It’s absolutely irrelevant whether it’s from MS or not.

  • Justin Shidell

    I’m pretty excited by the possibilities of this in accelerating applications, but as a rather green developer myself, I have a few questions:

    Is this primarily of benefit only on floating point operations? 

    Could a developer expect that any heavy loop operations could be parallelized to a benefit? Even operations performing tasks such as perhaps comparing integers, or comparing strings, blobs, etc.? 

    Assuming parallelization on GPGPU is still effective on any sort of iterative task sequence, does this mean that investing in even a low-end, PCI-based solution ($20) and adding it into an older PC or Server could mean a massive increase in performance? (In particular, I have a ~7 year old server that performs some heavy calculation, and if I can make a massive increase in performance for $100 by purchasing five PCI-based DX11 Nvidia cards, that’s a lot easier to pitch to mgmt than an entirely new server.)

    Lastly, is this limited to DX11 cards only? I know that HLSL support in previous versions of parallelism would be based on the Shader support; for example, 5.0 vs. 4.1 vs 4.0, etc. I’m curious if C++ AMP may be able to gain benefit from even older chipsets, for example, DX10 models.

  • Daniel Moth

    Justin, this is DX11 and later only.

  • Calisa Cole

    Members of the Silicon Valley C++ community are cordially invited to attend a joint Microsoft/NVIDIA event on Wed., June 29 on the topic of C++ technologies for heterogeneous computing.

    5:45 PM | Welcome by Will Ramey, NVIDIA
    6:00 PM | Heterogeneous Parallelism in General, C++ AMP in Particular, presented by Herb Sutter, Principal Architect for Windows C++, Microsoft
    7:15 PM | ALM tools for C++ in Visual Studio V.NEXT, presented by Rong Lu, Program Manager C++, Microsoft
    8:00 PM | The Power of Parallel, presented by the NVIDIA team:
      • Parallel Nsight: Programming GPUs in Visual Studio, Stephen Jones, NVIDIA
      • CUDA 4.0: Parallel Programming Made Easy, Justin Luitjens, NVIDIA
      • Thrust: C++ Template Library for GPGPUs, Jared Hoberock, NVIDIA

    Date and time: Wednesday, June 29, 2011 at 5:45 PM
    Venue: NVIDIA Headquarters, Building E, Santa Clara, CA
    Refreshments: Beverages & snacks will be provided
    Register through EventBrite: