

3.6.5 Antialiasing Recovery

Antialiased edges cause problems in the filtering step of the original algorithm. Since the range buffer is supersampled stochastically, the bilateral filter often cannot find enough neighbors with similar values for pixels on antialiased edges. That is because their color is a combination of the regions adjacent to the edge and is not present in the (undiscretized) signal itself. As a result, the filter sometimes cannot sufficiently smooth out noise for antialiased pixels. We present a simple edge model for the separable cross bilateral filter to address this issue. It is based on recent work in antialiasing recovery by Yang et al. [150].


Figure 3.17: Illustration of our edge model in a 2D red-green color space with two examples. Left: fully antialiased pixel; right: partially antialiased pixel. The pixel in question (here the central pixel) is projected onto the line segment defined by the gradient in color space. (In this illustration, we have depicted only two neighbors, but our implementation actually uses the Sobel operator to calculate the gradient, which takes a 3×3 neighborhood into account.) The distances a and b can be used to recover antialiased pixels during the filtering step.

In each one-dimensional filtering pass, we try to detect and fit a simple edge model to the center pixel. Similar to Yang et al., our edge model assumes that a pixel either covers an antialiased edge connecting exactly two regions, or is not antialiased at all. First, we apply a one-dimensional Sobel filter in the current filtering dimension to obtain the corresponding component of the gradient. Then we project the color of the central pixel onto the line segment defined by the gradient in RGB color space (Fig. 3.17). This yields two distances: a and b. The distance b tells us how close the central pixel is to the line segment. A small distance means the central pixel is close to being a linear combination of its neighbors, so we can assume with high confidence that it is an antialiased pixel (a good match for our simplified edge model). The distance a tells us how much each neighbor contributes (i.e., the coverage).
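The projection step can be sketched as follows. This is a minimal Python/NumPy illustration under the assumption that the neighbor colors $P_0$ and $P_2$ span the edge; the function name and conventions are ours, not taken from the original implementation:

```python
import numpy as np

def fit_edge_model(p0, p1, p2):
    """Project the center color p1 onto the segment p0-p2 in color space.

    Returns (a, b): a is the coverage distance along the segment from p0,
    b is the distance from p1 to the segment (the "goodness" of the fit).
    Colors are NumPy-compatible vectors (e.g. RGB triples).
    """
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    d = p2 - p0
    len2 = float(np.dot(d, d))
    if len2 == 0.0:
        # Degenerate edge: both neighbors are identical.
        return 0.0, float(np.linalg.norm(p1 - p0))
    # Parameter along the segment, clamped so the projection stays on it.
    t = float(np.clip(np.dot(p1 - p0, d) / len2, 0.0, 1.0))
    proj = p0 + t * d
    a = t * np.sqrt(len2)                  # distance from p0 along the edge
    b = float(np.linalg.norm(p1 - proj))   # distance to the segment
    return float(a), b
```

For example, a center pixel exactly halfway between black and pure red projects onto the midpoint of the segment with b = 0, i.e. a perfect fit to the edge model.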

To preserve antialiasing during the filtering step, we filter with three range values: the original antialiased range value and the range values of the neighbors (Fig. 3.18). This can be thought of as three filter windows in the color range domain. The two contributions corresponding to the neighbors ($L_0$, $L_2$) are blended based on the coverage ($a$) to get the contribution of the edge


Figure 3.18: Illustration of the filtering step with antialiasing recovery. Instead of filtering only with the range value for the antialiased pixel (which will not collect any samples but the central pixel itself), we also use two range values defined by the neighbors (which will collect samples). The three results are then blended based on the recovered antialiasing information (Fig. 3.17). The dashed lines illustrate the borders and search directions of the range filter windows: the window for the left (right) neighbor extends only to the left (right) and collects only samples similar to the left (right) range value; the window for the central pixel extends in both directions and collects samples similar to the central range value.

model ($L_e$):

$$L_e = (1 - \hat{a})\,L_0 + \hat{a}\,L_2, \qquad (3.19)$$

where $\hat{a} = a / \lVert P_2 - P_0 \rVert$ is the normalized coverage. Then $L_e$ is blended with the contribution of the antialiased range value ($L_a$) based on the "goodness" of the fit ($b$):

$$L = w\,L_e + (1 - w)\,L_a, \qquad w = g_{\sigma_c}(b), \qquad (3.20)$$

where $g_{\sigma_c}(b) = \exp\!\left(-\frac{b^2}{2\sigma_c^2}\right)$ is an unnormalized Gaussian with zero mean and standard deviation $\sigma_c$. It turns out that $\sigma_c$, the standard deviation of the color range kernel, is a good value to weigh the "goodness" of the fit. Equation 3.20 states that if the edge model is a good fit, the result will be a coverage-weighted combination of the regions adjacent to the edge; if the model cannot be fitted well, the original filtered value is used.

A problem with our simple edge model in combination with stochastic supersampling and the separable filter is that thin lines running along the first filter dimension may be thinned out. This is because the noise due to supersampling along these lines looks like many small edges to the algorithm (Fig. 3.19, bottom). However, as the noise in the range buffer disappears, the antialiasing recovery step can detect antialiased edges more reliably.

(Image columns, left to right: reference, reference, input (8 spp), original filter, antialiasing recovery.)

Figure 3.19: Antialiasing recovery during the filtering step. The original algorithm has problems smoothing out noise on antialiased edges. Top: the extension for antialiasing recovery can smooth antialiased pixels and remove the intensity spikes on the edges caused by the specular surface. Bottom: under certain circumstances, our simple edge model may make thin lines appear fainter than they actually are (details in text).

Figure 3.19 compares the original algorithm with a version that uses our extension for antialiasing recovery. The specular surfaces cause some intensity spikes on the edges of the circular checkerboard pattern. The original filter cannot smooth out all of these spikes, because it cannot find enough neighbors with similar range buffer entries. The antialiasing recovery step reliably detects and recovers antialiased edges. The bottom row of Figure 3.19 illustrates the problem with thin lines that run along the first filter dimension (here horizontally). Horizontal lines appear slightly thinner in the filtered version than they are in the reference image.

3.7 Conclusions

We have described a combined filtering and blending approach to reduce noise in stochastic ray tracing. The method is especially tailored to progressive rendering and achieves strong noise reduction right from the beginning of the rendering process. Filtering performance reflects the target application (progressive rendering of high-quality images). Our filter is slower than most related approaches for interactive rendering, but the quality of the filtered results is better, especially for more complex scenes featuring high-frequency textures on non-diffuse surfaces and reflective/refractive objects. The biggest innovation of our approach, however, is the blending operator, which allows a user to interactively balance noise versus bias as the image is rendered. Furthermore, it allows the method to use a progressive filtering scheme, which hides the comparatively high filtering costs. We have also described several optimizations that improve the performance of the original method in specialized cases.

The two most pressing issues that remain for future work are the process of finding suitable parameters and reducing filtering artifacts. The method presented in the following chapter addresses these shortcomings.

Appendix 3.A Theoretical Analysis of Blend