
5 | Applications of Frequency Analysis of Light Transport


Figure 5.17 – (a) Our method; (b) PPM; (c) L2 comparison. We compare the convergence for the caustic scene using the L2 norm for two equal-time renderings. While the caustic region exhibits the same amount of noise, the diffuse parts converge faster.

5.3 Participating media

In this section, we present results of applying our covariance analysis to the integration of volumetric effects in participating media (e.g., fog, clouds). First, we present a data structure that reduces the cost of covariance tracing by reusing covariance information from previously traced light-paths: the covariance grid (Section 5.3.1). Then, we present three uses of the covariance grid: adaptive sampling and filtering in image space (Section 5.3.2), adaptive sampling along a ray for single-scattering illumination (Section 5.3.3), and an improvement of the progressive photon beams algorithm [90] (Section 5.3.4).

5.3.1 Covariance grid

The covariance grid is a voxel grid that stores covariance information of the fluence (energy per unit volume) inside a participating medium. We do not store covariance information of the radiance, since this quantity is directional and would require a quantization of directions. Such quantization would dramatically increase the memory footprint of the covariance grid, making it less usable. Instead, we propose to store the 3D covariance of the fluence.

5.3.1.1 Definition

In each voxel $p$ of the grid, we store a $3 \times 3$ covariance matrix $\Sigma_p$, where entry $(i, j)$ is the $ij$-covariance of the Fourier transform of the local fluence in the neighborhood of $p$:

$$ (\Sigma_p)_{ij} = \int_{\omega} \omega_i\,\omega_j\,\mathcal{F}[I](\omega)\,d\omega \qquad (5.12) $$

The fluence $I$ is defined as:

$$ I(p) = \int_{\vec{d} \in S^2} l(p, \vec{d})\, d\vec{d} $$
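In practice, the grid is a dense voxel array in which each cell carries a symmetric $3 \times 3$ matrix plus an accumulated weight. The minimal C++ sketch below illustrates one possible layout; the names (Cov3, CovGrid) and the flat-array storage are our illustrative choices, not the implementation used for the results in this chapter.

```cpp
// Minimal sketch of a covariance grid: each voxel stores a symmetric 3x3
// covariance matrix of the local fluence spectrum, plus the accumulated
// intensity weight used later for normalization.
#include <array>
#include <vector>

struct Cov3 {
    // Symmetric 3x3 matrix stored as its 6 unique coefficients:
    // [xx, xy, xz, yy, yz, zz]
    std::array<float, 6> c{};
};

struct CovGrid {
    int nx, ny, nz;             // grid resolution (e.g., 32^3 in Fig. 5.20)
    std::vector<Cov3> cov;      // one covariance matrix per voxel
    std::vector<float> weight;  // accumulated intensity weights

    CovGrid(int x, int y, int z)
        : nx(x), ny(y), nz(z), cov(x * y * z), weight(x * y * z, 0.f) {}

    int index(int i, int j, int k) const { return (k * ny + j) * nx + i; }
};
```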


5.3.1.2 From Covariance of Radiance to Covariance of Irradiance

For a light-path and a position on this light-path, we can compute the covariance of the local radiance. We show here how, from a covariance matrix of radiance, we can obtain the spatial covariance matrix of the fluence.

Let $\Sigma$ be the covariance matrix at position $p$ of a light-path $l$. We can compute the 2D covariance of the fluence on the plane orthogonal to the central ray by looking at the integration of the radiance over angles, which in the Fourier domain is a slice of the spectrum.

$$ \Sigma_{\vec{x}_{loc},\vec{y}_{loc}} = \Sigma\big|_{1,2} \qquad (5.13) $$

We assume the local fluence to be constant along $\vec{z}_{loc}$, as infinitesimal analysis allows us to do (the shear due to travel is second order in this case).

The final 3D local covariance matrix of the fluence's spectrum is then:

$$ \Sigma_{loc} = \begin{bmatrix} \Sigma|_{1,2} & \mathbf{0} \\ \mathbf{0}^{T} & 0 \end{bmatrix} \qquad (5.14) $$

We note $\mathbf{0}$ and $\mathbf{0}^{T}$ the null column and row vectors used to complete the $2 \times 2$ covariance matrix.

Note that this covariance matrix is defined in a local frame $(\vec{x}_{loc}, \vec{y}_{loc}, \vec{z}_{loc})$. We rotate it to express the local covariance in the global frame $(\vec{x}, \vec{y}, \vec{z})$.
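These two steps amount to a few lines of matrix algebra: zero-pad the $2 \times 2$ slice as in Equation 5.14, then conjugate by a rotation matrix $R$ whose columns are the local axes expressed in world coordinates ($\Sigma_{world} = R\,\Sigma_{loc}\,R^T$). A sketch, with illustrative names:

```cpp
// Sketch of Eqs. (5.13)-(5.14): embed the 2x2 spatial slice of the radiance
// covariance into a 3x3 fluence covariance in the local frame, then rotate
// it into the global frame as Sigma_world = R * Sigma_loc * R^T.
#include <array>

using Mat3 = std::array<std::array<float, 3>, 3>;

// Zero-pad the 2x2 slice: the fluence is assumed constant along z_loc.
Mat3 embedSlice(float sxx, float sxy, float syy) {
    return {{{sxx, sxy, 0.f}, {sxy, syy, 0.f}, {0.f, 0.f, 0.f}}};
}

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat3 transpose(const Mat3& a) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) r[i][j] = a[j][i];
    return r;
}

// Express the local covariance in the global frame; R's columns are
// (x_loc, y_loc, z_loc) written in world coordinates.
Mat3 toWorld(const Mat3& R, const Mat3& sigmaLoc) {
    return mul(mul(R, sigmaLoc), transpose(R));
}
```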

5.3.1.3 Accumulating Local Irradiance's Covariance Matrices

The last step is to accumulate the different covariance matrices of the local fluence to get an estimate of the global covariance matrix of the fluence. This is possible because covariance is a property of an incoming light-path and can thus be integrated over the space of light-paths.

$$ \Sigma_p = \sum_{\vec{d} \in S^2} w_{\vec{d}}\, \Sigma_p(\vec{d}) \qquad (5.15) $$

where $w_{\vec{d}}$ is the light intensity weight.
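Equation 5.15 then reduces to a weighted accumulation per voxel. A sketch, reusing the illustrative CovGrid and Cov3 structures from the earlier example:

```cpp
// Sketch of Eq. (5.15): splat one light-path's local fluence covariance
// into a voxel as a weighted sum; the weight w is the light intensity
// carried by the path. The accumulated weight is kept for normalization.
void splat(CovGrid& grid, int i, int j, int k, const Cov3& local, float w) {
    int idx = grid.index(i, j, k);
    for (int n = 0; n < 6; ++n)
        grid.cov[idx].c[n] += w * local.c[n];  // weighted accumulation
    grid.weight[idx] += w;                     // for later normalization
}
```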

5.3.1.4 An example: A caustic from a glass sphere

The covariance grid stores the spatial covariance of the fluence. As such, directional high frequency regions are not represented in the orthogonal plane, but in the global frame. As an example, Figure 5.18 shows the equivalent Gaussian of the spatial covariance for a selection of points in a caustic created by a glass sphere lit by a spot light.

5.3.2 Image Space Adaptive Sampling and Reconstruction using the Covariance Grid

In this application, we accumulate covariance matrices on the image plane by ray marching the covariance grid. The 3D covariance matrices are sliced to extract a 2D covariance matrix in the local frame of the image. We add the eye path attenuation and occlusion spatial covariance matrix before accumulating on the screen.

5.3. PARTICIPATING MEDIA

Figure 5.18 – A caustic is created using a sphere and a spot light. We analyze three positions in the covariance grid and show the equivalent Gaussians on the right. The upper inset is a diffuse part of the scene; it displays the minimum covariance due to the grid analysis. The middle inset shows the spatial covariance of a shadow; the elongated peak is in the direction of the visibility. Last is a point inside the caustic. The caustic is a high frequency region where different directions accumulate high frequency. As such, the equivalent Gaussian expands in all three directions.

Figure 5.19– We accumulate spatial covariance matrices from the covariance grid by ray marching. In each voxel, we slice the covariance of the fluence’s spectrum and add the eye path attenuation and occlusion to it. The resulting 2D matrices are averaged using intensity weights.

We illustrate the accumulation using Figure 5.19. Given a camera shooting eye rays with starting position $p$ and direction $\vec{d}$, we construct the accumulated covariance matrix $\Sigma_{i,j}$ as a weighted average of the slices $\Sigma|_{x,y}$ of the spatial covariance matrices along the ray $c + t\vec{d}$, $t \in [0, d_{hit}]$, to which we add the covariance of the attenuation to the eye, $A(t)$.
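The accumulation loop can be sketched as follows, reusing the illustrative Mat3 and CovGrid types from the earlier examples. Here lookupCov and attenuationCov stand in for the renderer's grid interpolation and the eye-path attenuation $A(t)$; they are assumptions, not the original API.

```cpp
struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(float t, Vec3 v) { return {t * v.x, t * v.y, t * v.z}; }

struct Cov2 { float xx, xy, yy; };

// Assumed renderer hooks (signatures illustrative): normalized 3x3 fluence
// covariance and intensity weight at a point, and the covariance of the
// attenuation to the eye A(t).
Mat3 lookupCov(const CovGrid& grid, Vec3 p, float* w);
Cov2 attenuationCov(float t);

// Slice a 3x3 covariance onto the image plane: with an orthonormal frame
// (u, v) orthogonal to the ray, the 2D entries are the quadratic forms
// u^T S u, u^T S v and v^T S v.
Cov2 sliceToImage(const Mat3& S, Vec3 u, Vec3 v) {
    auto quad = [&](Vec3 a, Vec3 b) {
        const float av[3] = {a.x, a.y, a.z}, bv[3] = {b.x, b.y, b.z};
        float r = 0.f;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) r += av[i] * S[i][j] * bv[j];
        return r;
    };
    return {quad(u, u), quad(u, v), quad(v, v)};
}

// Weighted average of sliced covariances along the eye ray c + t*d.
Cov2 accumulatePixel(const CovGrid& grid, Vec3 c, Vec3 d, Vec3 u, Vec3 v,
                     float dHit, float dt) {
    Cov2 acc{0.f, 0.f, 0.f};
    float wSum = 0.f;
    for (float t = 0.5f * dt; t < dHit; t += dt) {
        float w = 0.f;
        Cov2 s = sliceToImage(lookupCov(grid, add(c, scale(t, d)), &w), u, v);
        Cov2 a = attenuationCov(t);    // eye-path attenuation A(t)
        acc.xx += w * (s.xx + a.xx);
        acc.xy += w * (s.xy + a.xy);
        acc.yy += w * (s.yy + a.yy);
        wSum += w;
    }
    if (wSum > 0.f) { acc.xx /= wSum; acc.xy /= wSum; acc.yy /= wSum; }
    return acc;
}
```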

Figure 5.20 shows how the effect of a shaft is handled by our adaptive method. Our method adapts the samples to favor crepuscular regions (such as the shaft of the sphere). The border of the spot light creates a high frequency region.

5.3.3 Adaptive sampling along a ray

To integrate the effect of the participating media into a ray tracer, we need to add another integral dimension per ray bounce (counting eye rays as a ray "bounce"). This integral accounts for the scattering of light from the scene sources along the ray. It is usually computed by ray marching along the ray, connecting the sampled positions on the ray to the light sources and adding their contributions to the integral with the correct attenuation factors.

Figure 5.20 – (a) Reconstruction; (b) estimated filters; (c) estimated density. A shaft created by a sphere lit by a spot light in an inhomogeneous volume with an exponentially varying density along the height. The covariance grid's size is $32^3$.

We use frequency information to drive the sampling of positions along the ray; Figure 5.21 illustrates our method. This method is similar to that of Engelhardt and Dachsbacher [52], where god rays are adaptively sampled with respect to the discontinuities of the integrand. Instead, we look at the variance along the ray. This way, we capture discontinuities due to occlusion (as they generate high variance spectra), as well as variations due to changing density and other effects such as the convergence of photons in the case of a caustic.

Figure 5.21– We perform adaptive sampling along the eye-ray by resampling regions of high variance. In a first pass, we estimate the covariance matrix and scattered radiance for a sparse set of samples. Then, from the covariance matrix, we estimate the variance of the incoming radiance along the eye-ray to resample regions with high variation.
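A compact sketch of this two-pass scheme follows. Here varianceAt and shade are hypothetical hooks (the former would read the variance off the sliced covariance matrix), and the refinement budget and threshold are illustrative parameters, not values from the original implementation.

```cpp
#include <vector>

// Hypothetical hooks: variance of the incoming radiance and the shaded
// single-scattering contribution at parameter t along the eye ray.
float varianceAt(float t);
float shade(float t);

// Two-pass adaptive sampling along one eye ray: sparse estimate, then
// refinement of high-variance intervals, then midpoint-rule quadrature.
float integrateRay(float dHit, int coarseN, int extraBudget, float threshold) {
    std::vector<float> ts, var;
    for (int i = 0; i < coarseN; ++i) {            // pass 1: sparse samples
        ts.push_back((i + 0.5f) * dHit / coarseN);
        var.push_back(varianceAt(ts.back()));
    }
    for (std::size_t i = 0; i + 1 < ts.size() && extraBudget > 0; ++i) {
        if (0.5f * (var[i] + var[i + 1]) > threshold) {  // pass 2: refine
            float mid = 0.5f * (ts[i] + ts[i + 1]);
            ts.insert(ts.begin() + i + 1, mid);
            var.insert(var.begin() + i + 1, varianceAt(mid));
            --extraBudget;
        }
    }
    float sum = 0.f;        // midpoint rule with non-uniform sample spacing
    for (std::size_t i = 0; i < ts.size(); ++i) {
        float left  = (i == 0) ? 0.f : 0.5f * (ts[i - 1] + ts[i]);
        float right = (i + 1 == ts.size()) ? dHit : 0.5f * (ts[i] + ts[i + 1]);
        sum += shade(ts[i]) * (right - left);
    }
    return sum;
}
```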

Since our algorithm takes advantage of adaptive sampling at shadow boundaries, we are able to reduce the aliasing of light shafts caused by undersampling high frequency regions, which would occur if we did not adapt the samples along the eye ray. Figure 5.22 shows our results on the Sibenik cathedral model, where the rose windows cast highly varying shafts due to the fine geometry of the rose.

Figure 5.22 – (left) Adaptive sampling along the eye path; (right) total number of samples per pixel. We present the result of integrating the shafts cast by the rose windows in the Sibenik cathedral (modeled by Marko Dabrovic). We also show the total number of samples used per pixel for our algorithm.

Our adaptive sampling strategy is based on the variance of the illumination, whereas traditional algorithms [190] are based on the maximum variance of the density of the medium along a light path. We therefore avoid oversampling regions that carry too little energy.

5.3.4 Frequency Progressive Photon Beams

We build upon Jarosz et al.'s progressive photon beams [90] (referred to as PPB) to illustrate the benefits of the frequency analysis. In the progressive photon beams algorithm, photons are traced in the scene containing a participating medium and their paths of propagation (called beams) are stored. Then, rays are shot from the camera for each pixel, and the density of beams along each ray is estimated using a 1D kernel (Figure 5.23). This process is repeated with decreasing kernel size until convergence is satisfactory.
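For reference, the radius schedule below is the one used by progressive photon mapping ($r_{i+1}^2 = r_i^2\,(i+\alpha)/(i+1)$, with $\alpha$ typically around 0.7); we show it only to make the "decreasing kernel size" step concrete, and the exact 1D schedule used for beams in [90] may differ.

```cpp
#include <cmath>

// Progressive radius reduction in the style of progressive photon mapping:
// the kernel shrinks every pass, slowly enough that variance still vanishes.
float nextRadius(float ri, int i, float alpha = 0.7f) {
    return ri * std::sqrt((i + alpha) / (i + 1.f));
}
```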

5.3.4.1 Gathering photon beams

During the gathering pass, for each eye ray, we test its distance $d$ to the stored beams (see Figure 5.23). At the closest point to each beam along the ray, we look into the covariance matrix and estimate the ideal gathering radius $r_\sigma$ using the error formula (Equation 5.11), but in a 1D setting. We gather that beam only if:

$$ d < \max(r_i, r_\sigma) $$


Figure 5.23 – Given a camera ray (in green) and a beam, we use the radius $r_\sigma$ estimated by the covariance analysis instead of the progressive photon mapping radius $r_i$ when $r_i$ is smaller. The effect is to gather more beams in low frequency regions, decreasing the variance of the estimate in those regions.

where $r_i$ is the radius given by the photon beams method for pass $i$. In other words, we replace the gathering radius of progressive photon mapping by a specific radius for each (eye-ray, photon beam) pair, adapted to the local variations of the signal. This adaptive radius computation prevents us from decreasing the radius in regions of low bandwidth, and therefore reduces variance while controlling the bias. We implemented this process in CUDA, which allows us to compare our results to the implementation of PPB by Jarosz et al. [90].
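The resulting per-pair test is a one-liner. In this sketch, radiusFromCovariance (the 1D instance of Equation 5.11) and beamContribution (the 1D kernel estimate) are hypothetical hooks, not functions from the original implementation.

```cpp
// Hypothetical hooks: r_sigma derived from the 1D error formula (Eq. 5.11)
// and the 1D kernel density estimate of a beam's contribution.
float radiusFromCovariance(float beamCovariance1D);
float beamContribution(float d, float r);

// Per (eye-ray, beam) pair: gather with radius max(r_i, r_sigma), so the
// radius never shrinks below what the local frequency content justifies.
float gatherBeam(float d, float ri, float beamCovariance1D) {
    float rSigma = radiusFromCovariance(beamCovariance1D);
    float r = (ri > rSigma) ? ri : rSigma;   // r = max(r_i, r_sigma)
    return (d < r) ? beamContribution(d, r) : 0.f;
}
```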

We validate our covariance computation step using classical test scenes such as a caustic produced by a glass sphere (Figure 5.25) and the soccer boy (Figure 5.24) to illustrate particular high frequency setups such as occlusion or light concentration. In both cases, our covariance framework correctly estimates the high frequency regions. Note that we do not follow specular eye paths to gather beams, which explains the inner look of the soccer boy.

At equal computation time, we achieve much better convergence in smoother regions of the image, while keeping equal convergence in high frequency regions such as the caustic.

5.3.4.2 Discussion

We do not need to precompute the covariance grid. As in the frequency progressive photon mapping algorithm, we update it while tracing photon beams. For each photon tracing pass, we allocate a fixed proportion of photons to carry a covariance matrix (for our scenes, we chose a proportion of 10%). The matrix is updated as the photon is reflected, refracted, and scattered, and the grid is updated using ray marching.
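A sketch of this update loop follows; emitPhoton, nextBounce, updateCovariance, and splatSegment are hypothetical hooks standing in for the photon tracer and for the covariance update operators for reflection, refraction, and scattering.

```cpp
#include <random>

// Hypothetical hooks into the photon tracer (not the original API):
struct Photon { /* position, direction, power, ... */ };
Photon emitPhoton(std::mt19937& rng);               // sample a light source
bool nextBounce(Photon& ph, std::mt19937& rng);     // false once absorbed
void updateCovariance(const Photon& ph, Mat3& cov); // reflect/refract/scatter
void splatSegment(CovGrid& grid, const Photon& ph, const Mat3& cov);

// One photon-tracing pass: a fixed fraction of photons (10% in our scenes)
// additionally carries a covariance matrix, updated at every event and
// splatted into the grid by ray marching along each beam segment.
void tracePass(CovGrid& grid, std::mt19937& rng, int count, float frac = 0.1f) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    for (int n = 0; n < count; ++n) {
        Photon ph = emitPhoton(rng);
        bool withCov = u(rng) < frac;
        Mat3 cov{};   // covariance state (initialization from the light omitted)
        do {
            if (withCov) {
                updateCovariance(ph, cov);
                splatSegment(grid, ph, cov);
            }
        } while (nextBounce(ph, rng));
    }
}
```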

The main strength of the covariance estimation of kernel radii is to stop the radius reduction when it passes below a frequency criterion. This allows the estimate to converge faster in low frequency regions. For high frequency regions, our estimation does not blur the beams, resulting in the same convergence as the classical progressive photon beams algorithm.


Figure 5.24 – (a) Our method, 740s; (b) equal-time PPB. Complicated caustics are produced by a glass soccer boy figurine (from Sun et al. [169]). Progressive photon beams is very effective at producing converged volumetric caustics, but rather slow at generating smooth diffusion. Our covariance grid algorithm is able to diffuse more beams in low frequency parts of the screen, allowing it to meet the stopping criterion sooner there.

Figure 5.25 – (a) Our method, 342; (b) equal-time PPB. A glass sphere is lit by a diffuse point light. This setup creates a caustic along the axis through the light and sphere centers.

Keeping a large radius slows down the selection process when an acceleration structure such as a KD-tree [169] is used: as the areas are bigger, the branching factor of the KD-tree search increases. In our CUDA implementation, which follows the method described by Jarosz et al., there is no penalty since there is no acceleration structure.

6 | Conclusion

This dissertation explored ways to optimize the generation of synthetic photo-realistic images. From a set of various light phenomena (e.g., depth-of-field, motion blur, scattering in fog), we derived the Fourier transform of the mathematical formulation using the framework proposed by Durand et al. [47]. We showed how to detect blurry regions and adapt the computation for various light transport algorithms in order to obtain faster convergence. In this chapter, we review the presented contributions and propose future directions of research.