**7.6 Implementations and Discussion**

**7.6.1 Limitations**

Our VSSM method shares the same failure cases as PCSS. The PCSS method assumes that all blockers within the filter kernel have the same depth. This “single blocker depth assumption” essentially flattens the blockers. As the light source becomes larger, this assumption is more likely to be violated and umbrae tend to be underestimated. Furthermore, PCSS generates only one depth map, rendered from the center of the light source. When a single depth map is used to handle blockers spanning a large depth range, single silhouette artifacts [Assarsson03] may appear. In fact, all existing PCSS-based soft shadow methods [Annen08a] [Lauritzen07] share these problems. Nevertheless, in most cases the soft shadows generated by VSSM are visually plausible and look very similar to the ray-traced reference.
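To see why flattening blockers matters, recall the standard PCSS penumbra estimate, which derives the penumbra width from similar triangles using a single average blocker depth. The following sketch (function and parameter names are illustrative, not from this chapter) shows how collapsing blockers at different depths to their mean changes the estimate:

```python
def pcss_penumbra_width(d_receiver, d_blocker_avg, w_light):
    """Standard PCSS penumbra estimate (similar-triangles heuristic).

    Assumes all blockers in the search region sit at a single average
    depth d_blocker_avg -- the "single blocker depth assumption"
    discussed above.
    """
    return (d_receiver - d_blocker_avg) * w_light / d_blocker_avg

# Two blockers at depths 1.0 and 3.0, receiver at depth 4.0, unit light:
# flattening them to their mean depth (2.0) yields a much smaller
# penumbra than the near blocker alone would produce.
w_flat = pcss_penumbra_width(4.0, 2.0, 1.0)  # -> 1.0
w_near = pcss_penumbra_width(4.0, 1.0, 1.0)  # -> 3.0
```

The gap between `w_flat` and `w_near` grows with the light size, which is why larger lights make the underestimation of umbrae more visible.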


**Figure 7.8: Shadow quality comparison between (a) VSSM (148 fps) and (b) the SAT-based variance shadow map (SAVSM) [Lauritzen07] (183 fps).**

**7.7** **Summary**

In this chapter, we have presented variance soft shadow mapping (VSSM) for rendering plausible soft shadows. VSSM is based on the theoretical framework of percentage-closer soft shadows. To estimate the average blocker depth for each scene point, we derived a novel formula for its efficient computation based on VSM theory. We solve the classical “non-planarity” lit problem by subdividing the filter kernel, which removes artifacts. As future work, we would like to apply our kernel subdivision method to exponential shadow mapping [Annen08b], which is also a single-bounded pre-filterable shadow mapping method.
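The average-blocker-depth estimation mentioned above can be sketched from VSM moments: Chebyshev's inequality gives the fraction of non-blocking texels in the kernel, and assuming the non-blocking texels lie at the receiver depth lets one solve for the blocker average. This is a simplified illustration in the spirit of the derivation, not the chapter's exact formulation; all names are illustrative.

```python
def avg_blocker_depth(mu, mu2, t):
    """Estimate average blocker depth over a filter kernel from the two
    VSM depth moments (a sketch of the VSSM-style derivation).

    mu, mu2 : first and second depth moments over the kernel
              (obtainable in O(1) from a summed-area table)
    t       : depth of the receiver point
    """
    sigma2 = max(mu2 - mu * mu, 1e-6)
    # Chebyshev bound on the fraction of texels NOT blocking depth t
    p_lit = 1.0 if t <= mu else sigma2 / (sigma2 + (t - mu) ** 2)
    if p_lit >= 1.0:
        return None  # fully lit: no blockers in the kernel
    # Assume non-blocking texels lie at the receiver depth t, so
    # mu = p_lit * t + (1 - p_lit) * z_occ; solve for z_occ:
    return (mu - p_lit * t) / (1.0 - p_lit)
```

For a kernel where half the texels block at depth 1.0 and half are lit at a receiver depth of 3.0 (mu = 2.0, mu2 = 5.0), the estimate recovers the blocker depth 1.0.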

**Part IV**

**Interactive Global Illumination in**

**Participating Media**

**Chapter 8** **Interactive Volume Caustics in Single-Scattering Media**

**8.1** **Introduction**

Global illumination (GI) effects significantly increase the realism of computer-generated scenes. Caustics caused by specular or refractive objects are stunning visual effects, even more so in participating media, where volumetric caustics can be observed, see Fig. 1.5.

However, most existing methods for computing volumetric caustics are computationally expensive, preventing interactive applications from including this appealing effect. In this chapter we propose a novel interactive volume caustics rendering method for single-scattering participating media. We derive a simplified physics-based model enabling the efficient rendering of volumetric caustics in participating media exhibiting variations in scattering and absorption coefficients as well as the, potentially anisotropic, scattering phase function. We describe a practical GPU-based implementation and evaluate the technique in detail.

Our method avoids all pre-computations, enabling the interactive simulation of light interaction with fully dynamic refractive and reflective objects while maintaining good temporal coherence in animated renderings. Since large and complex scenes can be handled efficiently by our technique, the rendering of volumetric caustics in interactive applications such as computer games becomes an option.

Additionally, our method is applicable to fast preview generation for more complex lighting simulations such as volumetric photon mapping [Jensen98], as required, e.g., for feature films and in commercial rendering packages.

In brief, we present the following contributions in this chapter:

• We derive a theoretically grounded, simplified image formation model for volume caustics in single-scattering media, and,

• based on a reformulation of the proposed model, we develop a screen-based interactive rendering technique that uses line primitives [Krüger06, Sun08] to efficiently splat radiance contributions to image pixels.

The remainder of this chapter is organized as follows: First, Section 8.2 gives a short overview of our method. We then derive a simplified image formation model for volume caustics in single-scattering media in Section 8.3; its implementation on graphics hardware is covered in Section 8.4. Section 8.5 presents experimental results and a comparison against existing techniques. We conclude the chapter in Section 8.6 and discuss avenues for future research.

**8.2** **Overview**

Image formation for volumetric caustics is an involved process. We thus start with an overview of our algorithm, Fig. 8.1. The radiance estimate for an image pixel in the presence of a participating medium and specular objects can be split into three separate components:

• radiance scattered into the viewing direction by the participating medium for incident rays not having interacted with specular objects (A),

• radiance scattered into the viewing direction from incident rays that experienced specular reflection or refraction events prior to the scattering event (B), and

• surface radiance, possibly illuminated by a caustic (C).

Since superposition of light is linear, the final image (D) can be computed by summing the three components. For steps (A) and (C) we employ previously developed algorithms. Our focus in this chapter is on the efficient computation of component (B).
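The linear superposition of the three components can be sketched as a per-pixel sum of the three render targets. The flat-list-of-RGB-tuples buffer layout below is illustrative only, not the chapter's actual render-target format:

```python
def compose_final_image(airlight, caustics, surface):
    """Sum the three radiance components per pixel: D = A + B + C.

    airlight : airlight image with volumetric shadows (A)
    caustics : volume caustic image (B)
    surface  : surface illumination, possibly with caustics (C)

    Each argument is a flat list of (r, g, b) tuples of equal length;
    light superposes linearly, so the buffers simply add.
    """
    return [tuple(a + b + c for a, b, c in zip(pa, pb, pc))
            for pa, pb, pc in zip(airlight, caustics, surface)]
```

In a GPU implementation this sum is typically a trivial additive blend or a final compositing pass over the three textures.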

On a coarse level, our technique employs the following steps: First, we render the airlight and volumetric shadow contributions (A) using the anisotropic version of Sun et al.’s model [Sun05] computed with the algorithm of Wyman and Ramsey [Wyman08a]. Second, the image with the volume caustics (B) is generated by computing the light paths from the light source that are reflected or refracted at a specular object. We draw these paths as lines and directly compute radiance contributions for each of the affected pixels, which are summed up over all line segments representing light rays. The algorithm can be seen as a radiance splatting operation that emulates GPU ray marching. This rendering pass is our


**Figure 8.1: The final image (D) is composed of an airlight image with a shadow volume (A), a volume caustic image (B) and the illumination of the surface (C).**

| Symbol | Meaning |
| --- | --- |
| $L_{in}$ | incoming radiance in the caustic volume |
| $d_{ab}$ | distance between points $a$ and $b$ |
| $T$ | Fresnel transmittance (or reflectance) |

**Figure 8.2: A summary of the notation used in this chapter.**

main contribution and it is described in detail in the following sections. Third, we generate an image of the surface illumination and the surface caustics (C) using Wyman’s hierarchical caustic map (HCM) algorithm [Wyman08b]. The final image is then the sum of these three images.
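The radiance splatting for component (B) can be illustrated on the CPU: sample points along one reflected or refracted light segment, attenuate the light along its path and toward the eye, and weight the in-scattered radiance by a (possibly anisotropic) phase function such as Henyey-Greenstein. This is a simplified sketch under homogeneous-medium assumptions; the actual method rasterizes each segment as a GPU line primitive, and all names and parameters here are illustrative:

```python
import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function (anisotropy parameter g)."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def splat_segment(p0, p1, eye, L_in, sigma_t, sigma_s, g, n_samples=8):
    """Accumulate single-scattered radiance along one refracted light
    segment toward the eye. Returns (sample_point, radiance) pairs;
    mapping sample points to image pixels is omitted.
    """
    def sub(a, b):
        return [x - y for x, y in zip(a, b)]

    def norm(v):
        length = math.sqrt(sum(x * x for x in v))
        return [x / length for x in v], length

    light_dir, seg_len = norm(sub(p1, p0))
    ds = seg_len / n_samples
    out = []
    for i in range(n_samples):
        t = (i + 0.5) * ds
        x = [a + t * d for a, d in zip(p0, light_dir)]
        eye_dir, d_eye = norm(sub(eye, x))
        cos_theta = sum(a * b for a, b in zip(light_dir, eye_dir))
        # extinction along the light segment and toward the eye,
        # scattering weighted by the phase function
        trans = math.exp(-sigma_t * (t + d_eye))
        L = L_in * sigma_s * hg_phase(cos_theta, g) * trans * ds
        out.append((x, L))
    return out
```

With `g = 0` the phase function degenerates to the isotropic constant `1 / (4π)`; nonzero `g` reproduces the forward- or backward-scattering lobes referred to in the text.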