Global Illumination Ray Tracer

On this page, I’ve documented my earlier work in ray tracing, which took place during two semester-long independent studies. I’ve learned a lot about graphics and ray tracing since then, but I thought it would be worthwhile to share my documented experiences for anyone who’s curious about how to get from point A to point B when writing a program like this. Or at least you can look at the pictures…

Introduction

Figure 1: The ray tracer interface, with rendering options on the right.

This independent study consisted of research into the theory and implementation of several modern techniques in ray tracing. The motivation behind this project was that, although ray tracing is a simple and flexible way to generate images, its capabilities are very limited without additional features. For example, since the ray tracing algorithm used in computer graphics (shooting rays from the eye into the scene) is “backwards” compared to the real world (rays emanate from light sources and gather in the eye), modeling phenomena such as diffuse-diffuse interreflection requires a global illumination algorithm such as photon mapping. The features implemented over the course of the semester included:

  • Photon mapping with final gathering
  • Anti-aliasing (with jittered sampling)
  • Soft shadows and area lights (with jittered sampling)
  • Camera location/orientation/FOV control
  • Blinn-Phong shading
  • Translucent materials
  • Exposure adjustment

In the following sections some of these features and their associated implementations will be discussed.

Photon Mapping

Figure 2: Direct visualization of the primary photon map with no final gathering. Photon search radii of 15 (left), 30 (middle), and 100 (right) are shown.

Photon mapping is a two-phase algorithm for approximating global illumination. In the first phase, photons are emitted from light sources and interact with the environment, with their interactions stored in the photon map data structure. The second phase is the rendering phase, in which the contents of the photon map are used to approximate global illumination for a particular pixel. In this process, the photon map can either be visualized directly, or a technique known as final gathering may be used, in which multiple rays emitted from a point sample the photon map at their intersection points.
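To make the rendering phase concrete, here is a minimal sketch of the radiance estimate used when visualizing the map directly. I’m assuming a flat array of photon records with illustrative names; a real implementation (including Jensen’s design) stores the photons in a kd-tree for fast lookup. The search radius is the parameter varied across the three panels of fig. 2.

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Photon { Vec3 pos; Vec3 power; };

    // Estimate the radiance at point x by summing the power of every
    // photon within the search radius and dividing by the area of the
    // disc they were gathered over. A larger radius smooths the estimate
    // but blurs illumination detail, as seen in fig. 2.
    Vec3 radianceEstimate(const std::vector<Photon>& map,
                          const Vec3& x, float radius) {
        Vec3 sum = {0, 0, 0};
        float r2 = radius * radius;
        for (const Photon& p : map) {
            float dx = p.pos.x - x.x, dy = p.pos.y - x.y, dz = p.pos.z - x.z;
            if (dx*dx + dy*dy + dz*dz <= r2) {
                sum.x += p.power.x; sum.y += p.power.y; sum.z += p.power.z;
            }
        }
        float area = 3.14159265f * r2;  // disc of radius r around x
        return { sum.x / area, sum.y / area, sum.z / area };
    }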

Figure 3: Direct illumination (left) vs. photon mapping with final gathering (right).

Implementation Details

For this project, the design of the photon map data structure was based heavily on the publicly available design by Henrik Wann Jensen. Once it was incorporated, the next step was to modify the rendering algorithm to take the photon map into account. Initially, the photon map was visualized directly (fig. 2) for testing purposes.

Once the direct photon map visualization was complete, work began on final gathering. The final gathering implementation (fig. 3) is simple, in that it merely samples the hemisphere above the point randomly. More sophisticated techniques, such as stratifying the sampling procedure, would likely reduce the noise of the final gather. Nonetheless, typical global illumination effects such as color bleeding are noticeable, especially in the area directly under the sphere.
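In code, the gather step looks roughly like the sketch below. The traceToPhotonMap callback is a placeholder of mine standing in for “trace a ray and return the photon map’s radiance estimate at its first intersection”; it is not the project’s actual interface, and the other names are illustrative as well.

    #include <cstdlib>
    #include <functional>

    struct Vec3 { float x, y, z; };

    static float rnd() { return std::rand() / (RAND_MAX + 1.0f); }

    // Uniformly sample a direction on the hemisphere above the normal n:
    // rejection-sample the unit sphere, then flip any direction that
    // points below the surface.
    static Vec3 hemisphereDirection(const Vec3& n) {
        Vec3 d;
        do { d = { 2*rnd() - 1, 2*rnd() - 1, 2*rnd() - 1 }; }
        while (d.x*d.x + d.y*d.y + d.z*d.z > 1.0f);
        if (d.x*n.x + d.y*n.y + d.z*n.z < 0.0f)
            d = { -d.x, -d.y, -d.z };
        return d;
    }

    // Final gathering: average the indirect radiance seen along nSamples
    // random hemisphere rays; each ray queries the photon map where it
    // lands. Stratifying these directions would reduce the noise.
    Vec3 finalGather(const Vec3& point, const Vec3& normal, int nSamples,
                     const std::function<Vec3(Vec3, Vec3)>& traceToPhotonMap) {
        Vec3 sum = {0, 0, 0};
        for (int i = 0; i < nSamples; ++i) {
            Vec3 L = traceToPhotonMap(point, hemisphereDirection(normal));
            sum.x += L.x; sum.y += L.y; sum.z += L.z;
        }
        return { sum.x / nSamples, sum.y / nSamples, sum.z / nSamples };
    }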

Anti-aliasing

Figure 4: 1 sample per pixel (left), 4 samples per pixel (middle), and 25 samples per pixel (right).

Anti-aliasing reduces the jagged appearance of edges in ray traced images. Jittered sampling can be used to perform anti-aliasing rather easily: the pixel is divided into cells of equal size, and a sample location is chosen randomly within each cell. Combining the sampled values into one value for the pixel can be done in a number of ways; this program included the tent and box filters.
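The cell subdivision amounts to just a few lines; here is a sketch with illustrative names rather than the program’s actual ones:

    #include <cstdlib>
    #include <vector>

    struct Sample { float x, y; };

    static float rnd() { return std::rand() / (RAND_MAX + 1.0f); }

    // Divide pixel (px, py) into an n x n grid of equal cells and pick
    // one random sample location inside each cell, for n*n samples total.
    std::vector<Sample> jitteredPixelSamples(int px, int py, int n) {
        std::vector<Sample> samples;
        samples.reserve(n * n);
        float cell = 1.0f / n;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                samples.push_back({ px + (i + rnd()) * cell,
                                    py + (j + rnd()) * cell });
        return samples;
    }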

Implementation Details

The impact of anti-aliasing on image quality is very noticeable (fig. 4), and when examined closely the difference between the filters becomes apparent as well. The tent filter tends to produce slightly sharper edges, while the box filter produces slightly blurrier ones.
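The two filters differ only in how they weight each sample by its offset from the pixel center. The sketch below shows one common parameterization, not necessarily the exact constants used in this program:

    #include <cmath>

    // Box filter: every sample in the pixel contributes equally
    // (weights are normalized by their sum when combining).
    float boxWeight(float, float) {
        return 1.0f;
    }

    // Tent filter: weight falls off linearly with distance from the
    // pixel center (dx, dy in [-0.5, 0.5]), so central samples dominate
    // and edges come out slightly sharper.
    float tentWeight(float dx, float dy) {
        return (1.0f - std::fabs(2.0f * dx)) * (1.0f - std::fabs(2.0f * dy));
    }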

Blinn-Phong Shading

Figure 5: Specular exponent of 10 (left), 50 (middle), and 100 (right).

Blinn-Phong shading is a technique for simulating the specular highlights of shiny surfaces, and is an improvement over Phong shading. The highlights are produced by computing the halfway vector between the viewer and light source directions, taking the dot product of this vector with the surface normal, and raising the result to a specular exponent that controls the size of the highlight.
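The computation is only a few lines. Here is a self-contained sketch with its own vector helpers; the names are illustrative:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    // Blinn-Phong specular term: the halfway vector between the unit
    // view and light directions is dotted with the surface normal, and
    // the result is raised to the specular exponent controlling the
    // highlight size (10, 50, and 100 in fig. 5).
    float blinnPhongSpecular(const Vec3& toViewer, const Vec3& toLight,
                             const Vec3& normal, float exponent) {
        Vec3 h = normalize({ toViewer.x + toLight.x,
                             toViewer.y + toLight.y,
                             toViewer.z + toLight.z });
        float nDotH = dot(normal, h);
        return nDotH > 0.0f ? std::pow(nDotH, exponent) : 0.0f;
    }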

Implementation Details

The Blinn-Phong shading model proved to work very well (fig. 5). Highlights from area lights were approximately the same shape as the area light being reflected.

Soft Shadows and Area Lights

The lack of soft shadows is one of the most significant shortcomings of a basic ray tracing program. Adding them requires lights with physical area, since the softness is achieved by sampling various points on the light’s surface and combining the results. One way to perform this sampling, similar to the approach described in the anti-aliasing section, is jittered sampling: the light is divided into cells, and shadow rays are cast from the shaded point to a random position within each cell.
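This is sketched below for a rectangular light, with a stubbed-out shadow-ray test; the names and the function-pointer interface are my own placeholders, not the program’s actual ones:

    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    static float rnd() { return std::rand() / (RAND_MAX + 1.0f); }

    // Fraction of an area light visible from `point`. The light is a
    // rectangle spanned by edgeU and edgeV from `corner`, divided into
    // an n x n grid; one shadow ray is cast to a jittered position in
    // each cell. `occluded` stands in for the shadow-ray intersection
    // test against the scene.
    float lightVisibility(const Vec3& point, const Vec3& corner,
                          const Vec3& edgeU, const Vec3& edgeV, int n,
                          bool (*occluded)(const Vec3&, const Vec3&)) {
        int visible = 0;
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) {
                float u = (i + rnd()) / n;  // jittered point in cell (i, j)
                float v = (j + rnd()) / n;
                Vec3 target = { corner.x + u*edgeU.x + v*edgeV.x,
                                corner.y + u*edgeU.y + v*edgeV.y,
                                corner.z + u*edgeU.z + v*edgeV.z };
                if (!occluded(point, target)) ++visible;
            }
        }
        return float(visible) / (n * n);
    }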

Figure 6: Soft shadows with 1 (left), 4 (middle), and 16 (right) shadow rays per sample.

Implementation Details

Soft shadows proved to be a very computationally intensive feature. Removing the noise they introduce (fig. 6) requires a large number of shadow rays; in practice, 16 shadow rays per sample were the minimum for reducing noise to acceptable levels. Multiple optimization techniques exist to speed up this calculation, primarily dynamic sampling methods, in which a pixel is only sampled finely if large deviations exist between a few coarse samples. Such a method would be especially effective on large, flat surfaces with no shadows.
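This adaptive scheme wasn’t implemented here, but the idea can be sketched in a few lines; `sampleVisibility` is a hypothetical callback that averages a given number of shadow rays:

    #include <cmath>

    // Take two cheap coarse passes; only when they disagree (i.e. the
    // point lies in a penumbra) pay for the full fine-grained pass.
    // Fully lit and fully shadowed regions exit early.
    float adaptiveVisibility(float (*sampleVisibility)(int nRays),
                             int coarseRays, int fineRays, float threshold) {
        float a = sampleVisibility(coarseRays);
        float b = sampleVisibility(coarseRays);
        if (std::fabs(a - b) < threshold)
            return 0.5f * (a + b);
        return sampleVisibility(fineRays);
    }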

Motion Blur Introduction

This independent study continued the work done last semester on ray tracing. While last semester’s work covered a variety of areas, this semester was more focused: the areas pursued consisted primarily of techniques for implementing different types of motion blur. Both object-based and camera-based motion blur were explored and implemented, with various parameters available to control blur amount and quality. The motivation behind this project was to create a starting point for future research into perceptual differences between types of motion blur in static images and animation. The following features were explored:

  • Shutter time
  • Object velocity
  • Sampling quality
  • Jittered sampling
  • Camera location movement
  • Camera point-of-interest movement

These features will be discussed in the following sections.

Shutter Time

Figure 7: The effect of shutter time on an image from a moving camera. Shutter times of 0.1 (left), 0.5 (middle), and 1.0 seconds (right) are shown.

Perhaps the most important parameter for controlling the amount of motion blur is the camera’s shutter time (fig. 7), which sets the time window used by the camera for exposure. Larger shutter times widen the sampling window used when sending out rays from the camera, thereby capturing a longer duration of motion. Note that the shutter time, and the velocity parameter discussed in the next section, also control the length of object shadows. This is because shadow rays are cast based on the time of the incoming camera ray, so a longer shutter time lengthens and blurs the shadow just as much as the object itself.
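Roughly, each primary ray gets stamped with a time inside the shutter window, and secondary rays inherit it. The sketch below uses illustrative names, not the program’s actual classes:

    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    struct Ray {
        Vec3 origin, dir;
        float time;  // instant within the exposure this ray samples
    };

    static float rnd() { return std::rand() / (RAND_MAX + 1.0f); }

    // A longer shutter time widens the window of sampled times, so the
    // rays collectively see a longer slice of the motion.
    float sampleRayTime(float shutterOpen, float shutterTime) {
        return shutterOpen + rnd() * shutterTime;
    }

    // Shadow rays inherit the camera ray's time, which is why shadows
    // stretch and blur along with the objects casting them.
    Ray makeShadowRay(const Ray& primary, const Vec3& hit, const Vec3& light) {
        return { hit,
                 { light.x - hit.x, light.y - hit.y, light.z - hit.z },
                 primary.time };
    }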

Object Velocity

Figure 8: The impact of object velocity on the blur amount. Velocities of 100 (left), 200 (middle), and 300 pixels/sec (right) are shown.

The shutter time is helpful for adjusting the amount of motion blur for an entire scene, but it does not allow the blur to be adjusted for individual objects. That can be accomplished by setting each object’s velocity individually (fig. 8): the velocity is used to compute the object’s location at the various points in time sampled by the camera, so a faster-moving object travels farther during the shutter window and appears blurrier.
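With linear motion, evaluating an object at a ray’s time is a one-liner (names illustrative):

    struct Vec3 { float x, y, z; };

    // Position of a linearly moving object at the instant sampled by the
    // current ray. A higher velocity means the object covers more ground
    // within the shutter window, smearing across more pixels.
    Vec3 positionAtTime(const Vec3& basePos, const Vec3& velocity, float time) {
        return { basePos.x + velocity.x * time,
                 basePos.y + velocity.y * time,
                 basePos.z + velocity.z * time };
    }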

Sampling Quality

Since motion blur in ray tracing is traditionally accomplished by casting multiple sample rays distributed in time, the number of these rays controls the quality of the resulting blur (fig. 9). More rays help remove noise, which improves the look of the blur. In practice, 16 time samples proved sufficient for scenes with a small amount of blur, while as many as 48 samples were necessary to adequately remove noise from scenes with fast-moving objects. The number of samples required grows with the magnitude of motion because the color values returned by the samples vary more widely.

Figure 9: From left to right: 1, 4, 16, and 64 motion blur samples.
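Putting the pieces together, the per-pixel loop amounts to averaging the radiance of N rays fired at different times. In the sketch below, `shade` is a stand-in of mine for tracing one full camera ray at a given time:

    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    static float rnd() { return std::rand() / (RAND_MAX + 1.0f); }

    // Average nSamples radiance values taken at random times across the
    // shutter window. More samples mean less noise but a slower render;
    // fast-moving objects need more samples because the returned colors
    // vary more between times.
    Vec3 motionBlurSample(int nSamples, float shutterOpen, float shutterTime,
                          Vec3 (*shade)(float time)) {
        Vec3 sum = {0, 0, 0};
        for (int i = 0; i < nSamples; ++i) {
            Vec3 c = shade(shutterOpen + rnd() * shutterTime);
            sum.x += c.x; sum.y += c.y; sum.z += c.z;
        }
        return { sum.x / nSamples, sum.y / nSamples, sum.z / nSamples };
    }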

The samples are currently weighted uniformly across the time window, but a Gaussian or other filter could be used instead. A Gaussian filter would weight time samples near the center of the sampling window more heavily, decreasing the apparent extent of the blur. Determining which weighting filter yields the most perceptually realistic results is left to future exploration.
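A Gaussian weighting of the time samples might look like the sketch below, with the weights normalized by their sum when combining; sigma is a free parameter of my own here, not one of the program’s settings:

    #include <cmath>

    // Gaussian weight for a sample at time t, centered on the middle of
    // the shutter window. A smaller sigma concentrates weight near the
    // window's center, visually tightening the blur streak.
    float gaussianTimeWeight(float t, float shutterTime, float sigma) {
        float d = t - 0.5f * shutterTime;
        return std::exp(-(d * d) / (2.0f * sigma * sigma));
    }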

Jittered Sampling

Figure 10: Jittered sampling (left) vs. 4 non-jittered samples (middle) and 8 non-jittered samples (right).

Jittered sampling is essential when implementing motion blur in ray tracing because of the very noticeable strobing artifacts that occur without it (fig. 10). Jittering the samples allows the entire shutter window to be sampled rather than a small set of fixed time steps. Jittering does add noise, but the strobing artifacts of non-jittered sampling are so pronounced that the noise is a necessary evil.
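The difference between the two schemes is just where each sample lands within its time cell:

    #include <cstdlib>

    static float rnd() { return std::rand() / (RAND_MAX + 1.0f); }

    // Non-jittered: every pixel samples the same n fixed instants, so a
    // moving object renders as n distinct ghost copies (the strobing
    // visible in fig. 10).
    float fixedSampleTime(int i, int n, float shutterTime) {
        return (i + 0.5f) / n * shutterTime;
    }

    // Jittered: each sample lands at a random point within its cell, so
    // the whole shutter window gets covered at the cost of some noise.
    float jitteredSampleTime(int i, int n, float shutterTime) {
        return (i + rnd()) / n * shutterTime;
    }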

An important concern for animation purposes is the sampling noise introduced within each sampling cell, and how that noise is perceived from one frame to the next. In this implementation, the random distribution does not depend on time, so the jitter pattern is fixed across frames.

Camera Location Movement

Figure 11: Camera location movement along the X-axis (left), Y-axis (middle), and Z-axis (right). Note that the point-of-interest was kept static in the middle of the scene during the camera motion.

Another type of motion blur comes from changing the position or orientation of the camera and sampling this motion through time. Moving the camera’s position while keeping the point-of-interest static changes the camera’s orientation as well; if the position and point-of-interest move at the same rate, the orientation stays fixed. In this manner, a great deal of control over the camera movement can be achieved. Figure 11 shows the camera location alone being moved along each of the three axes.
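Both this and the point-of-interest movement described in the next section reduce to interpolating the camera’s eye and look-at points independently across the shutter window. A sketch with illustrative names:

    struct Vec3 { float x, y, z; };

    struct CameraPose { Vec3 eye, lookAt; };

    static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
        return { a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
    }

    // Camera pose at a normalized time t in [0, 1] across the shutter.
    // Moving only the eye while pinning lookAt changes the orientation
    // too; moving both by the same offset keeps the orientation fixed.
    CameraPose cameraAtTime(const CameraPose& open, const CameraPose& close,
                            float t) {
        return { lerp(open.eye, close.eye, t),
                 lerp(open.lookAt, close.lookAt, t) };
    }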

Camera Point-Of-Interest Movement

By moving the camera’s point-of-interest, its orientation can be changed without altering its location. Combined with the methods of the previous section, this means appropriate motion blur can be applied to any 6-DOF camera movement.

Figure 12: Camera point-of-interest movement along the X-axis (left), Y-axis (middle), and Z-axis (right). Note that the location of the camera was kept static during the orientation change.

Conclusions and Future Work

As mentioned earlier, the goal of this project was to build a starting point for future research into the perceptual properties of motion blur in ray tracing. That goal was met: all initially proposed features were explored and implemented.

With this variety of features, from full camera control to sampling quality, a wide array of variables can be isolated and tested in a research setting. These results can then be analyzed to develop a rough idea of the degree of sampling necessary to produce clean-looking motion blur, both in static images and animation. Furthermore, other motion blur settings can be tested (such as the weight filter mentioned earlier) to determine the most perceptually realistic settings.