nique is extremely flexible, easy to implement, and produces accurate
results. However, in a naïve implementation, it takes an extremely long
time to converge on an image with acceptably low noise, because few
of the rays sent into the scene ever reach a light within a reasonable
number of bounces. The result is said to have high variance, i.e., high noise.
To reduce variance, a technique called importance sampling is employed, so that rays are more likely to “randomly” bounce into a light.
It would appear, however, that this approach would produce incorrect
(biased) results, since more rays now hit the light than should; in other
words, the resulting image would be brighter than it should be. To counter
this effect, the light along each ray that hits the light is scaled by a
probability factor inversely proportional to the number of extra rays sent
toward the light. Thus, if twice as many rays are sent in a certain direction
(toward a light), each is weighted by ½ of its normal contribution,
producing an unbiased image. The result is an image with considerably less
variance at essentially no cost. Importance sampling can be used here because a large
number of the possible angles between the incoming and outgoing ray
do not contribute a significant amount of light, as seen in the plot of the
droplet function in Figure 3. Therefore, the sample implementation
sends fewer rays in the directions that contribute less light.
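The inverse-probability weighting described above can be illustrated with a small Monte Carlo sketch. This is not the paper's implementation; the integrand and sampling densities here are purely illustrative, but the principle is the same: each sample is divided by the probability of having drawn it, so non-uniform sampling stays unbiased.

```python
import math
import random

def estimate_integral(f, pdf, sample, n=100_000):
    """Monte Carlo estimator: draw points from `pdf` and divide each
    contribution by the probability of having drawn it, keeping the
    estimate unbiased even though sampling is non-uniform."""
    total = 0.0
    for _ in range(n):
        x = sample()               # draw x with density pdf(x)
        total += f(x) / pdf(x)     # weight by inverse probability
    return total / n

# Toy example (an assumption for illustration): integrate f(x) = x^2
# over [0, 1]. Uniform sampling (pdf = 1) vs. importance sampling with
# pdf(x) = 2x, which sends more samples where f is large.
random.seed(0)
f = lambda x: x * x
uniform = estimate_integral(f, lambda x: 1.0, random.random)
importance = estimate_integral(f, lambda x: 2.0 * x,
                               lambda: math.sqrt(random.random()))
# Both estimates converge to 1/3; the importance-sampled one has
# lower variance because f/pdf varies less from sample to sample.
```

In a renderer, the role of `pdf` is played by the probability of sampling a given scattering direction, such as one drawn from the droplet function of Figure 3.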
Similar to the bias issue in importance sampling, the path tracing
volume integrator would be biased if rays were naïvely shot into the
scene. In order for the path tracing algorithm to be unbiased, each
path must be weighted based on the probability that it actually
occurs. Logically, because there are more possible paths when the
length of the path is long than when it is short, longer paths are less
likely to occur. To account for this, each step is assigned a probability,
and a path's weight is the product of the probabilities of every step taken
so far, because as a ray proceeds through the volume, it becomes
less probable that the specific ray would actually contribute to the
scene. Specifically, each step is weighted by a factor of 1/(4π)^s, where
s is the step length. This is based on the derivation for surface path
tracing found on page 746 of Pharr and Humphreys [4]. For path
tracing, a ray is weighted by the inverse of the number of all possible rays that could
occur. In this case, all rays that could occur at a certain point in the
volume come from the surface of a unit sphere, so the probability is
1/(surface area of a unit sphere) = 1/(4π). Because spheres of different
sizes may need to be considered, the weight is reformulated as 1/(4π)^s,
based on geometry. For example, a sphere of radius 3 would have
probability 1/(4π)^3, because it is essentially the probability of three
unit spheres in succession. To further verify this formulation, it is also correct in the
limit; as the sphere considered becomes infinitely large, the probability of a ray coming from that distance approaches 0. In addition, as
the sphere becomes infinitely small, the probability approaches 1.
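The per-step weighting just described can be sketched in a few lines. This is illustrative code, not the paper's implementation; it simply accumulates the 1/(4π)^s factor per step and checks the stated limits.

```python
import math

def path_weight(step_lengths):
    """Cumulative weight of a volume path: each step of length s
    multiplies the running weight by 1/(4*pi)**s, following the
    per-step probability factor described in the text."""
    w = 1.0
    for s in step_lengths:
        w *= 1.0 / (4.0 * math.pi) ** s
    return w

# A single unit step weighs 1/(4π); three unit steps weigh the same
# as one step of length 3, i.e., 1/(4π)^3. A zero-length step weighs
# exactly 1, and the weight approaches 0 as the step grows without
# bound, matching the limits discussed in the text.
```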
Sample Implementation: Path Tracing
In the sample implementation, the light source is an imaginary light
given as a parameter to the integrator. As an optimization, the implementation does not use the built-in PBRT light sources. The imaginary
light is specified using a 3-D origin and direction. This forms an infinite
planar light, and any path that crosses this plane is considered terminated. This is not physically correct, but it was useful for practical purposes, and the implementation could be extended to use PBRT’s light
sources, at the expense of considerably more variance. The implementation
has a maximum path size, given by the user, in order to prevent
infinite loops in extreme cases. Paths that reach the maximum path size
without hitting the light are ignored. The path tracing algorithm only collects paths that go from a light to the image plane. Including paths that
never hit the light would introduce bias; the image would be too dark.
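The loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not PBRT code: `trace_path`, `isotropic`, and all parameter names are hypothetical, and the light plane is tested by a sign change in the signed distance.

```python
import math
import random

def isotropic(_):
    # Uniform random direction on the unit sphere (ignores the
    # previous direction, i.e., isotropic scattering).
    z = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def trace_path(origin, light_point, light_normal, max_steps, step, scatter):
    """March a ray through the volume; terminate when the path crosses
    the infinite light plane, and discard paths that exceed the maximum
    length (returned as None rather than counted as zero)."""
    def side(q):
        # Signed distance of q to the light plane.
        return sum((qi - li) * ni
                   for qi, li, ni in zip(q, light_point, light_normal))
    pos, direction = origin, scatter(None)
    weight = 1.0
    for _ in range(max_steps):
        new_pos = tuple(p + step * d for p, d in zip(pos, direction))
        if side(pos) * side(new_pos) <= 0.0:  # crossed the plane: hit light
            return weight
        weight *= 1.0 / (4.0 * math.pi) ** step  # per-step probability
        pos, direction = new_pos, scatter(direction)
    return None  # never reached the light within max_steps: ignored

random.seed(1)
w = trace_path((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (0.0, 0.0, 1.0),
               max_steps=64, step=0.5, scatter=isotropic)
```

Returning `None` rather than zero for over-long paths mirrors the text's choice to ignore such paths instead of counting them as dark, which would bias the image.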
In order to retain the rainbow effect, there is a parameter in the
integrator to bias the particles toward bouncing a ray directly at the
light. This is not physically correct, but is desirable, because it allows
the rainbow effect to be better controlled for artistic reasons.
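Such a bias parameter could be sketched as follows; `biased_bounce` and its arguments are assumptions for illustration, not the paper's code. With some probability, the bounced ray is aimed straight at the light instead of being drawn from the phase function; as the text notes, this is intentionally biased.

```python
import random

def biased_bounce(sample_phase, to_light, light_bias):
    """With probability `light_bias`, send the bounced ray straight
    toward the light instead of sampling the phase function.
    Intentionally not physically correct, for artistic control."""
    if random.random() < light_bias:
        return to_light
    return sample_phase()
```

A bias of 0 falls back to ordinary phase-function sampling, while a bias of 1 always aims at the light, giving the artist a continuous dial on the strength of the rainbow effect.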
Figure 5 is an image of a smoke dataset produced by Ron Fedkiw
[4], rendered with the light to the side, using the multiple-scattering
integrator, to show how it differs from the single-scattering integrator in Figure 6. Note: the angle of view is not wide enough to produce
rainbows, so they do not appear.
Figure 5: Multiple-scattering cloud rendering.
Figure 6: Single-scattering cloud rendering.
Unfortunately, although multiple-scattering does produce a more
cloud-like appearance, as seen in Figure 5, it takes several orders of
magnitude longer to compute images with this technique. Therefore,
it is impractical to use it to render complex scenes, such as the waterfall in Figure 4, without further optimization or advanced hardware.
An obvious extension would be to generalize the water rendering
technique described so that it produces correct results when water
molecules are extremely dense, to the point that they coalesce.
Currently, the model assumes that all particles are spherical droplets,
which is only the case with the “splashes” surrounding the waterfall
image in Figure 4. The waterfall itself and the pool at the base of the
fall must be rendered with another model.