What’s the Difference Between Ray Tracing and Rasterization?

by Brian Caulfield

What is ray tracing? Just go to your nearest multiplex, plunk down a twenty and pick up some popcorn.

There may not be many people outside of computer graphics who know what ray tracing is, but there are very few people on the planet who haven’t seen it.

Ray tracing is the technique modern movies rely on to generate or enhance special effects. Think realistic reflections, refractions and shadows. Getting these right makes starfighters in sci-fi epics scream. It makes fast cars look furious. It makes the fire, smoke and explosions of war films look real.

Ray tracing produces images that can be indistinguishable from those captured by a camera. Live-action movies blend computer-generated effects and images captured in the real world seamlessly, while animated feature films cloak digitally generated scenes in light and shadow as expressive as anything shot by a cameraman.

What Is Ray Tracing?

The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

Historically, though, computer hardware hasn’t been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline in render farms. Video games have only a fraction of a second. As a result, most real-time graphics rely on another technique, rasterization.

Literally cinematic: if you’ve been to the movies lately, you’ve seen ray tracing in action.

What Is Rasterization?

Real-time computer graphics have long used a technique called “rasterization” to display three-dimensional objects on a two-dimensional screen. It’s fast. And the results have gotten very good, even if they’re still not always as good as what ray tracing can do.

With rasterization, objects on the screen are built from a mesh of virtual triangles, or polygons, that together form 3D models of those objects. In this virtual mesh, the corners of each triangle, known as vertices, intersect with the vertices of other triangles of different sizes and shapes. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture coordinates and its “normal,” which is used to determine which way the surface of an object is facing.

Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.
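As a rough sketch of those two ideas, here is what the per-vertex data and the triangle-to-pixel conversion might look like in code. Everything below is illustrative and hypothetical rather than any particular engine’s API: each pixel covered by the triangle gets an initial color interpolated from the triangle’s three vertices.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical per-vertex data: position, color, texture coordinates and normal.
struct Vertex {
    float x, y, z;      // position (screen space here; z kept for depth)
    float r, g, b;      // color
    float u, v;         // texture coordinates
    float nx, ny, nz;   // surface normal
};

struct Pixel { float r, g, b; };

// "Edge function": signed-area test used to decide whether a pixel center
// lies inside the triangle, and to compute interpolation weights.
static float edge(const Vertex& a, const Vertex& b, float px, float py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

// Rasterize one triangle into a row-major framebuffer of width * height pixels,
// giving each covered pixel an initial color interpolated from the vertices.
void rasterizeTriangle(const Vertex& v0, const Vertex& v1, const Vertex& v2,
                       std::vector<Pixel>& framebuffer, int width, int height) {
    float area = edge(v0, v1, v2.x, v2.y);
    if (area == 0.0f) return;  // degenerate triangle, nothing to draw

    // Only visit pixels inside the triangle's bounding box.
    int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int maxX = std::min(width - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int maxY = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;              // pixel center
            float w0 = edge(v1, v2, px, py) / area;          // barycentric weights
            float w1 = edge(v2, v0, px, py) / area;
            float w2 = edge(v0, v1, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;        // outside the triangle
            Pixel& p = framebuffer[y * width + x];
            p.r = w0 * v0.r + w1 * v1.r + w2 * v2.r;         // initial color from vertex data
            p.g = w0 * v0.g + w1 * v1.g + w2 * v2.g;
            p.b = w0 * v0.b + w1 * v1.b + w2 * v2.b;
        }
    }
}
```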

Further pixel processing, or “shading,” then adjusts each pixel’s color based on how lights in the scene hit it and applies one or more textures, generating the final color given to the pixel.
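In sketch form, a minimal version of that per-pixel shading step might look like this, assuming a single light source and one texture (the names and structures below are made up for illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

// A tiny, hypothetical texture: a grid of texels with nearest-neighbor lookup.
struct Texture {
    int width, height;
    std::vector<Color> texels;
    Color sample(float u, float v) const {
        int x = std::clamp((int)(u * width), 0, width - 1);
        int y = std::clamp((int)(v * height), 0, height - 1);
        return texels[y * width + x];
    }
};

// Shade one pixel: modulate its interpolated vertex color by a texture sample,
// then scale by how directly the surface faces the light (Lambert's cosine law).
// This is the kind of per-pixel work a pixel shader performs.
Color shadePixel(const Color& baseColor, const Texture& tex, float u, float v,
                 const Vec3& normal, const Vec3& lightDir, const Color& lightColor) {
    Color texel = tex.sample(u, v);
    float nDotL = std::max(0.0f, normal.x * lightDir.x +
                                 normal.y * lightDir.y +
                                 normal.z * lightDir.z);   // facing ratio toward the light
    return { baseColor.r * texel.r * lightColor.r * nDotL,
             baseColor.g * texel.g * lightColor.g * nDotL,
             baseColor.b * texel.b * lightColor.b * nDotL };
}
```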

This is computationally intensive. A scene can use millions of polygons across all its object models, and a 4K display holds roughly 8 million pixels (3840 × 2160 is about 8.3 million). On top of that, the image on screen is typically refreshed 30 to 90 times each second.

Additionally, memory buffers, a bit of temporary space set aside to speed things along, are used to render upcoming frames in advance before they’re displayed on screen. A depth or “z-buffer” is also used to store per-pixel depth information, ensuring that the front-most object at a pixel’s x-y screen location is the one displayed, while objects behind it remain hidden.
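The depth test itself is a simple comparison. Here is a minimal, hypothetical sketch of a framebuffer with a z-buffer alongside it: a new pixel is written only if it’s closer to the viewer than whatever is already stored at that screen location.

```cpp
#include <limits>
#include <vector>

struct Color { float r, g, b; };

// Hypothetical framebuffer paired with a z-buffer: one depth value per pixel,
// initialized to "infinitely far away."
struct Framebuffer {
    int width, height;
    std::vector<Color> color;
    std::vector<float> depth;

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, Color{0, 0, 0}),
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Keep a shaded pixel only if it's nearer than what's already there,
    // so front-most objects win and anything behind them stays hidden.
    void writePixel(int x, int y, float z, const Color& c) {
        int i = y * width + x;
        if (z < depth[i]) {
            depth[i] = z;
            color[i] = c;
        }
    }
};
```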

This is why modern, graphically rich computer games rely on powerful GPUs.

Ray Tracing Explained

Ray tracing is different. In the real world, the 3D objects we see are illuminated by light sources, and photons can bounce from one object to another before reaching the viewer’s eyes.

Light may be blocked by some objects, creating shadows. Or light may reflect from one object to another, such as when we see the images of one object reflected in the surface of another. And then there are refractions, when light changes direction as it passes through transparent or semi-transparent objects, like glass or water.

Ray tracing captures those effects by working back from our eye (or view camera), a technique first described by IBM’s Arthur Appel in 1968 in “Some Techniques for Shading Machine Renderings of Solids.” It traces the path of a light ray through each pixel on a 2D viewing surface out into a 3D model of the scene.
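In code, that first step, generating a ray for each pixel, might look like the sketch below, assuming a simple pinhole camera sitting at the origin and looking down the negative z-axis (all names here are illustrative):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 normalized() const {
        float len = std::sqrt(x * x + y * y + z * z);
        return { x / len, y / len, z / len };
    }
};

struct Ray { Vec3 origin, direction; };

// Build the ray for pixel (px, py) of a width x height image: it starts at the
// eye and passes through that pixel's center on the 2D viewing plane, heading
// out into the 3D scene.
Ray generateCameraRay(int px, int py, int width, int height, float fovYRadians) {
    float aspect = (float)width / (float)height;
    float scale  = std::tan(fovYRadians * 0.5f);
    // Map the pixel center onto the viewing plane, with y pointing up.
    float viewX = (2.0f * (px + 0.5f) / width - 1.0f) * aspect * scale;
    float viewY = (1.0f - 2.0f * (py + 0.5f) / height) * scale;
    return { {0, 0, 0}, Vec3{viewX, viewY, -1.0f}.normalized() };
}
```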

The next major breakthrough came a decade later. In a 1979 paper, “An Improved Illumination Model for Shaded Display,” Turner Whitted, now with NVIDIA Research, showed how to capture reflection, shadows and refraction.

Turner Whitted’s 1979 paper jump-started a ray tracing renaissance that has remade movies.

With Whitted’s technique, when a ray encounters an object in the scene, the color and lighting information at the point of impact on the object’s surface contributes to the pixel color and illumination level. If the ray bounces off or travels through the surfaces of different objects before reaching the light source, the color and lighting information from all those objects can contribute to the final pixel color.
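A heavily simplified sketch of that recursion follows, for a scene made only of spheres and a single point light. It is not Whitted’s original code, refraction rays for transparent surfaces are noted but omitted for brevity, and every name is illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <optional>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3  operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3  operator*(float s)       const { return {x * s, y * s, z * s}; }
    float dot(const Vec3& o)       const { return x * o.x + y * o.y + z * o.z; }
    Vec3  normalized() const { float l = std::sqrt(dot(*this)); return {x / l, y / l, z / l}; }
};
struct Ray    { Vec3 origin, direction; };           // direction assumed normalized
struct Sphere { Vec3 center; float radius; Vec3 color; float reflectivity; };
struct Hit    { float t; Vec3 point, normal; const Sphere* object; };

// Closest ray/sphere intersection in the scene, if any.
std::optional<Hit> intersect(const Ray& ray, const std::vector<Sphere>& scene) {
    std::optional<Hit> closest;
    for (const Sphere& s : scene) {
        Vec3 oc = ray.origin - s.center;
        float b = oc.dot(ray.direction);
        float c = oc.dot(oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0) continue;
        float t = -b - std::sqrt(disc);
        if (t < 1e-4f || (closest && t >= closest->t)) continue;
        Vec3 p = ray.origin + ray.direction * t;
        closest = Hit{t, p, (p - s.center).normalized(), &s};
    }
    return closest;
}

// Whitted-style tracing: local shading at the hit point, a shadow ray toward the
// light, and a recursive reflection ray. (Refraction rays through transparent
// objects would be spawned the same way and blended in.)
Vec3 trace(const Ray& ray, const std::vector<Sphere>& scene,
           const Vec3& lightPos, int depth) {
    auto hit = intersect(ray, scene);
    if (!hit) return {0, 0, 0};                               // background

    // Shadow ray: for simplicity, any hit along it counts as blocking the light.
    Vec3 toLight = (lightPos - hit->point).normalized();
    Ray shadowRay{hit->point + hit->normal * 1e-3f, toLight};
    bool inShadow = intersect(shadowRay, scene).has_value();
    float diffuse = inShadow ? 0.0f : std::max(0.0f, hit->normal.dot(toLight));
    Vec3 color = hit->object->color * diffuse;

    // Reflection ray: bounce off the surface and recurse, letting other objects
    // contribute to this pixel's final color.
    if (depth > 0 && hit->object->reflectivity > 0) {
        Vec3 r = ray.direction - hit->normal * (2.0f * ray.direction.dot(hit->normal));
        Ray reflected{hit->point + hit->normal * 1e-3f, r.normalized()};
        Vec3 bounce = trace(reflected, scene, lightPos, depth - 1);
        color = color * (1.0f - hit->object->reflectivity) + bounce * hit->object->reflectivity;
    }
    return color;
}
```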

Another pair of papers in the 1980s laid the rest of the intellectual foundation for the computer graphics revolution that upended the way movies are made.

In 1984, Lucasfilm’s Robert Cook, Thomas Porter and Loren Carpenter detailed how ray tracing could incorporate a number of common filmmaking techniques — including motion blur, depth of field, penumbras, translucency and fuzzy reflections — that could, until then, only be created with cameras.

Two years later, Caltech professor Jim Kajiya’s paper, “The Rendering Equation,” finished the job of mapping the way computer graphics are generated onto physics, to better represent the way light scatters throughout a scene.
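The equation itself is compact. In a common form, the light L_o leaving a surface point x in direction ω_o is the light the surface emits plus an integral, over the hemisphere of incoming directions, of the incoming light the surface scatters toward ω_o:

$$L_o(x,\,\omega_o) = L_e(x,\,\omega_o) + \int_{\Omega} f_r(x,\,\omega_i,\,\omega_o)\, L_i(x,\,\omega_i)\, (\omega_i \cdot n)\, d\omega_i$$

Here L_e is emitted light, L_i is light arriving from direction ω_i, f_r is the surface’s reflectance function (its BRDF), and the cosine term ω_i · n accounts for the angle at which light strikes the surface. Ray tracing algorithms are, in effect, ways of approximating this integral.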

Combine this research with modern GPUs, and the results are computer-generated images that capture shadows, reflections and refractions in ways that can be indistinguishable from photographs or video of the real world. That realism is why ray tracing has gone on to conquer modern moviemaking.

Light, shadow, reflection: this computer-generated image, created by Enrico Cerica using OctaneRender, shows ray-traced glass distortion in the light fixture, diffuse lighting in the window, frosted glass in the lantern and, on the floor, a reflection of the framed picture.

It’s also very computationally intensive. That’s why moviemakers rely on vast numbers of servers, known as render farms. And it can take days, even weeks, to render complex special effects.

To be sure, many factors contribute to the overall graphics quality and performance of ray tracing. In fact, because ray tracing is so computationally intensive, it’s often used for rendering those areas or objects in a scene that benefit the most in visual quality and realism from the technique, while the rest of the scene is rendered using rasterization. Rasterization can still deliver excellent graphics quality.

What’s Next for Ray Tracing?

As GPUs continue to grow more powerful, putting ray tracing to work for ever more people is the next logical step. For example, armed with powerful GPUs and ray-tracing tools such as Arnold from Autodesk, V-Ray from Chaos Group or Pixar’s RenderMan, product designers and architects use ray tracing to generate photorealistic mockups of their products in seconds, letting them collaborate better and skip expensive prototyping.

Ray tracing has proven itself to architects and lighting designers, who are using its capabilities to model how light interacts with their designs.

As GPUs offer ever more computing power, video games are the next frontier for this technology. On Monday, NVIDIA announced NVIDIA RTX, a ray-tracing technology that brings real-time, movie-quality rendering to game developers. It’s the result of a decade of work in computer graphics algorithms and GPU architectures.

It consists of a ray-tracing engine running on NVIDIA Volta architecture GPUs. It’s designed to support ray tracing through a variety of interfaces. NVIDIA partnered with Microsoft to enable full RTX support via Microsoft’s new DirectX Raytracing (DXR) API.

And to help game developers take advantage of these capabilities, NVIDIA also announced the GameWorks SDK will add a ray tracing denoiser module. The updated GameWorks SDK, coming soon, includes ray-traced area shadows and ray-traced glossy reflections.

All of this will give game developers, and others, the ability to bring ray-tracing techniques to their work to create more realistic reflections, shadows and refractions. As a result, the games you enjoy at home will get more of the cinematic qualities of a Hollywood blockbuster.

The downside: You’ll have to make your own popcorn.

Check out “Physically Based Rendering: From Theory to Implementation,” by Matt Pharr, Wenzel Jakob and Greg Humphreys. It offers both mathematical theory and practical techniques for putting modern photorealistic rendering to work.

And learn about the key concepts of ray tracing in the on-demand webinar, Ray-Tracing Essentials.