During last week’s live chat session, Bill Dally and David Luebke answered a number of great questions on topics ranging from GPU computing research to NVIDIA’s graphics processing technologies and the upcoming GPU Technology Conference (GTC). If you missed it, you can replay the full session here.

Unfortunately, as hard as they tried, Bill and David simply couldn’t respond to all of the questions they received in the time available. So, I wanted to take this opportunity to share their answers to a few additional questions from the session below.

Thanks again to everyone who participated in this session. Be sure to stay tuned to nTersect for news, opinion and insight on all things GPU.

And we hope to see you at GTC 2010, which is just a couple of short weeks away!

Question from Michela:
Accuracy of scientific applications is a key aspect in parallel programming and there has been some effort in implementing tools for verification of programs used in computational science, including numerically-intensive message-passing-based parallel programs. Is there a similar effort for GPU programming?

Bill Dally:
Yes, but we need a lot more. GPUs already employ more parallelism than all but the largest MPI applications, which poses substantial scalability problems for most existing techniques. That said, a streaming model of parallelism can offer some advantages to many symbolic formal verification techniques. Uncertainty quantification is also important, but most of the existing techniques used in MPI applications are suitable for the GPU as well.

Question from Christian:
Do you see any chance in the coming year that ray tracing will be implemented in one of the big graphics APIs such as DirectX/DirectCompute or OpenCL/OpenGL, so more people can use and benefit from it?

David Luebke:
DirectX and OpenGL evolved to provide flexible and performance-friendly abstractions of the rasterization-based “forward” rendering pipeline. We think such a programmable ray tracing “pipeline” is an incredibly important goal for the future of interactive rendering, which is why we created OptiX (http://developer.nvidia.com/object/optix-home.html and http://research.nvidia.com/publication/optix-general-purpose-ray-tracing-engine). We see such a ray tracing pipeline as complementing, not competing with, the existing rasterization-oriented pipelines – in many cases the best algorithm is a hybrid that exploits rasterization’s extremely efficient point-to-scene visibility coupled with ray tracing’s very powerful scene-to-scene visibility queries. For this reason OptiX provides OpenGL and DirectX interoperability, and there are samples of each in the OptiX SDK.

As for whether GL/DX could evolve to include ray tracing, my personal view is that it is a mistake to try to design a ray tracing API that looks exactly like the familiar rasterization APIs – they are very different algorithms and the natural abstractions are different. I think providing both APIs, along with a flexible low-level computing platform like CUDA upon which developers can code their own innovative algorithms, is the right answer.
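To make the hybrid idea concrete, here is a minimal, hypothetical CUDA sketch (it does not use the OptiX API): per-pixel surface positions are assumed to come from a rasterized G-buffer, and a ray tracing pass answers a shadow-visibility query for each pixel against a toy list of spheres. The kernel and buffer names (shadowRays, hitSphere, gbufferPos) and the scene representation are placeholders for illustration only.

```
// Hypothetical CUDA sketch of a hybrid step: rasterization supplies surface
// positions (a G-buffer), and a ray tracing pass answers per-pixel
// shadow-visibility queries against a toy sphere list. Not the OptiX API.
#include <cuda_runtime.h>
#include <math.h>

struct Sphere { float3 center; float radius; };

// Returns true if the normalized ray (o, d) hits the sphere before tMax.
__device__ bool hitSphere(Sphere s, float3 o, float3 d, float tMax)
{
    float3 oc = make_float3(o.x - s.center.x, o.y - s.center.y, o.z - s.center.z);
    float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.radius * s.radius;
    float disc = b * b - c;                  // quadratic discriminant (a == 1)
    if (disc < 0.0f) return false;
    float t = -b - sqrtf(disc);              // nearest intersection distance
    return t > 1e-4f && t < tMax;
}

// One thread per pixel: cast a ray from the G-buffer position toward a point
// light and record whether the light is visible (1) or shadowed (0).
__global__ void shadowRays(const float3* gbufferPos, const Sphere* spheres,
                           int numSpheres, float3 lightPos,
                           unsigned char* visible, int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float3 o = gbufferPos[i];
    float3 d = make_float3(lightPos.x - o.x, lightPos.y - o.y, lightPos.z - o.z);
    float dist = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
    d.x /= dist; d.y /= dist; d.z /= dist;   // normalize direction toward the light

    unsigned char v = 1;
    for (int s = 0; s < numSpheres; ++s)
        if (hitSphere(spheres[s], o, d, dist)) { v = 0; break; }
    visible[i] = v;
}
```

In a real hybrid renderer the visibility buffer produced this way would be combined with the rasterized shading pass, which is what the OpenGL and DirectX interoperability in the OptiX SDK is there to support.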

Question from Anton:
Do you expect technologies like ray tracing will become part of consumer graphics applications 5-10 years from now? What exactly is needed to drive them to the mainstream market and which of these requirements/tools do we have today? When approximately do you expect this to happen?

David Luebke:
Ray tracing already plays a large role in games and interactive applications, mostly during a pre-process phase but sometimes also per-frame. As GPUs become more programmable, this trend is likely to continue, but the power advantages and tool ecosystem of rasterization are hard to ignore. OptiX can be used for hybrid rendering today, and we believe that the future will employ a broad range of algorithms limited only by your ingenuity.

Question from Yong:
It would be great if NV-GPU-Affinity or other multi-display support could be moved into GeForce GTX series cards, instead of being limited to Quadro cards. Does NVIDIA have plans to do so?

David Luebke:
Multi-display support is an increasingly important technology on the consumer (gaming) side and GeForce supports best-in-class surround gaming including 3D Vision Surround. Right now we see GPU affinity as a more specialized capability that tends to come up only in the professional and “boutique” applications that our Quadro professional solutions are designed to address.

Also, David answered a question from Christian about the well-known “Marching Cubes” algorithm during the Live Chat, but wanted to provide a bit of additional information:

David Luebke:
Chris Dyken and Gernot Ziegler will present a talk at the GPU Technology Conference in a couple of weeks about their CUDA-based marching cubes application. Chris writes that it is roughly 2-5 times faster than their previous OpenGL-based approach, which probably makes it the fastest marching cubes implementation in the world. If you’re interested, they have put up a small teaser video here: http://www.youtube.com/watch?v=69yKfh0JLqk.
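For readers curious about how a CUDA marching cubes implementation is typically structured, here is a simplified, hypothetical sketch of the first stage: each thread classifies one voxel by sampling a scalar field at its eight corners, building an 8-bit cube index, and looking up how many vertices the voxel will emit. The field function, grid layout, and corner ordering are placeholder assumptions (the ordering must match whichever lookup table is used); this is not the implementation Chris and Gernot will present.

```
// Hypothetical CUDA sketch of the marching cubes "classification" stage:
// one thread per voxel builds an 8-bit cube index from the scalar field and
// records how many vertices that voxel will emit.
#include <cuda_runtime.h>

__constant__ unsigned char d_numVertsTable[256];   // standard MC table, uploaded by the host

// Placeholder analytic density field: positive inside a sphere of radius 0.4
// centered in the unit cube.
__device__ float field(float x, float y, float z)
{
    x -= 0.5f; y -= 0.5f; z -= 0.5f;
    return 0.4f - sqrtf(x * x + y * y + z * z);
}

__global__ void classifyVoxels(unsigned int* voxelVerts, unsigned char* cubeIndex,
                               int3 gridSize, float isoValue)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int numVoxels = gridSize.x * gridSize.y * gridSize.z;
    if (i >= numVoxels) return;

    // Recover 3D voxel coordinates from the linear thread index.
    int x = i % gridSize.x;
    int y = (i / gridSize.x) % gridSize.y;
    int z = i / (gridSize.x * gridSize.y);
    float3 cell = make_float3(1.0f / gridSize.x, 1.0f / gridSize.y, 1.0f / gridSize.z);

    // Sample the field at the eight corners; set one bit per corner that lies
    // below the iso-value. The corner ordering must match the lookup table.
    unsigned char idx = 0;
    for (int c = 0; c < 8; ++c) {
        float fx = (x + ( c       & 1)) * cell.x;
        float fy = (y + ((c >> 1) & 1)) * cell.y;
        float fz = (z + ((c >> 2) & 1)) * cell.z;
        if (field(fx, fy, fz) < isoValue) idx |= (unsigned char)(1 << c);
    }
    cubeIndex[i] = idx;
    voxelVerts[i] = d_numVertsTable[idx];   // vertices this voxel will generate
}
```

In a full implementation, a prefix sum over the per-voxel vertex counts typically follows, so that active voxels can be compacted and output geometry allocated before the triangles themselves are generated.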

The GPU Technology Conference (GTC) takes place Sept. 20-23 at the San Jose Convention Center. You can stay up to date by following the GTC blog RSS feed, signing up for our email list or joining our GTC Facebook fan page.