
2.2 Lighting simulation

2.2.2 Ray Tracing

The ray tracing technique was introduced in [Whi80]. From the eye, that is, for each pixel of the image on the viewport, a ray is traced through the scene.

The ray intersection point is calculated with the ray casting algorithm. When an intersection point is found, three new rays can be considered. First, the reflected ray, which is traced again along the reflected direction, taking into account the BRDF. Second, the refracted ray, which traces the incoming ray through a translucent or transparent material, taking into account the BTDF. Third, the shadow ray, used to check whether the current point is in shadow by tracing a ray towards the light sources: if the shadow ray first intersects an opaque object, the current point is in shadow. The three rays and the material properties at the current point are used to calculate the shading for this point, and thus the final color of the image pixel. This recursive process stops when no further ray intersects the geometry, or when a maximum number of bounces is reached.
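
A minimal sketch of this recursion is shown below. The helpers intersect(), occluded(), reflect(), refract() and direct_lighting(), as well as the color constants BLACK and BACKGROUND, are hypothetical names introduced only to illustrate the structure of the algorithm; they do not come from [Whi80].

def trace(ray, scene, depth, max_depth=5):
    # Stop after a maximum number of bounces
    if depth > max_depth:
        return BLACK
    hit = intersect(ray, scene)              # ray casting: closest ray-geometry hit
    if hit is None:
        return BACKGROUND                    # no more geometry is intersected
    color = BLACK
    for light in scene.lights:
        # Shadow ray: the point contributes direct lighting only if unoccluded
        if not occluded(hit.point, light, scene):
            color += direct_lighting(hit, light)             # uses the BRDF
    if hit.material.is_reflective:
        # Reflected ray, traced again along the reflected direction
        color += hit.material.kr * trace(reflect(ray, hit), scene, depth + 1)
    if hit.material.is_transparent:
        # Refracted ray through a translucent or transparent material (BTDF)
        color += hit.material.kt * trace(refract(ray, hit), scene, depth + 1)
    return color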

Ray tracing is a view-dependent algorithm, so if the eye position or view direction changes, the scene must be rendered again. This is necessary to obtain realistic details such as reflections or specular effects. Also, the quality of the results depends on the image resolution, since higher resolutions trace more rays into the scene; if not enough rays are used, aliasing effects can appear. To solve this, multiple rays are sampled for each pixel. In [CPC84] several improvements were presented to obtain more realistic effects, such as motion blur, depth of field, shadow penumbra and fuzzy reflections.
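
As an illustration of the per-pixel supersampling mentioned above, the following hedged sketch averages several jittered rays per pixel; camera_ray() and trace() are assumed helpers (see the previous sketch), not functions described in [CPC84].

import random

def render_pixel(x, y, scene, samples_per_pixel=16):
    color = BLACK
    for _ in range(samples_per_pixel):
        u = x + random.random()              # jittered sub-pixel position
        v = y + random.random()
        color += trace(camera_ray(u, v), scene, depth=0)
    return color / samples_per_pixel         # average of all pixel samples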

A variation of ray tracing, taken from the opposite point of view, is light ray tracing. This method, first presented in [Arv86], traces the light rays from the light sources through the scene until no more geometry is intersected (see Figure 2.3). The light rays are sampled from the light source distribution description using Monte Carlo methods (see next section). Furthermore, shadow rays are not used, and the technique is view independent. This method is useful when the outgoing light distribution of a scene is required, as for example in reflector lighting. The number of light rays traced depends on the desired quality of the light distribution.

Figure 2.3: Light ray tracing description.
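
The following sketch illustrates light ray tracing under simplified assumptions: a point light whose emission directions are sampled uniformly over the sphere (a stand-in for the real light source distribution), with rays followed through the scene while their energy is recorded at every hit. The Ray type and the helpers intersect(), record_hit() and bounce() are hypothetical.

import math
import random

def sample_direction():
    # Uniform direction over the unit sphere (Monte Carlo sampling)
    z = 2.0 * random.random() - 1.0
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def trace_light_rays(light, scene, n_rays, max_bounces=8):
    for _ in range(n_rays):
        ray = Ray(light.position, sample_direction())
        energy = light.power / n_rays
        for _ in range(max_bounces):
            hit = intersect(ray, scene)
            if hit is None:
                break                                    # no more geometry is intersected
            record_hit(hit, energy)                      # accumulate the outgoing distribution
            ray, energy = bounce(ray, hit, energy)       # reflect/refract at the surface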

Monte Carlo methods

[Whi80] and [CPC84] introduced the idea of stochastic methods to sample view rays or light sources. This led to the rendering equation [Kaj86] and the first unbiased Monte Carlo light transport algorithm, called Path Tracing. The main difference with ray tracing is the generation of the reflected ray, which is sampled stochastically on the hemisphere over the intersected point.

Also, the shadow ray is not traced. The idea is to bounce the ray recursively through the scene until it reaches a light source. This path is commonly known as a random walk. The main drawback of this method is the need to trace a large number of rays per pixel to get enough sampled paths, and consequently accurate results.
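
A minimal path tracing sketch follows, with the same hypothetical names as before plus sample_hemisphere() (returning a direction and its probability density) and dot(); it reproduces the structure of the random walk described above rather than any particular published implementation.

def path_trace(ray, scene, depth, max_depth=10):
    if depth > max_depth:
        return BLACK
    hit = intersect(ray, scene)
    if hit is None:
        return BACKGROUND
    if hit.material.is_emitter:
        return hit.material.emission             # the random walk reached a light source
    # Stochastic bounce: sample a direction on the hemisphere over the hit point
    direction, pdf = sample_hemisphere(hit.normal)
    brdf = hit.material.brdf(ray.direction, direction)
    cos_theta = dot(hit.normal, direction)
    incoming = path_trace(Ray(hit.point, direction), scene, depth + 1)
    return brdf * incoming * cos_theta / pdf     # Monte Carlo estimate of the reflected light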

To improve this, Bidirectional Path Tracing was presented in [LW93] and [VG94]. Two random walks are traced, one from the eye, as in path tracing, and another one from each light source. The light ray is also sampled by Monte Carlo methods. Then, the hit points of the respective paths are connected together using shadow rays, and the appropriate contributions are added to the final result.
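
The connection step can be sketched as follows, with eye_path and light_path being the lists of vertices of the two random walks; occluded() and contribution() are illustrative placeholders for the visibility test and the combined throughput of a connection.

def connect_paths(eye_path, light_path, scene):
    color = BLACK
    for ev in eye_path:                          # vertices of the walk from the eye
        for lv in light_path:                    # vertices of the walk from the light
            # Shadow ray between the two vertices; only unoccluded pairs contribute
            if not occluded(ev.point, lv.point, scene):
                color += contribution(ev, lv)    # BRDFs, geometry term and path throughput
    return color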

In [VG97] the Metropolis Light Transport algorithm was presented, based on a variation of the Metropolis Monte Carlo method. This method improves on bidirectional path tracing for effects such as caustics, or cases where indirect lighting is concentrated. From the traced light rays, each path is mutated by inserting new points into the path and new propagation directions. The mutations are accepted by sampling a probability function based on how much light arrives at the eye.
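
The acceptance rule of the mutation step can be sketched as a standard Metropolis test; mutate() and path_contribution() are hypothetical placeholders for the mutation strategies and for the amount of light a path carries to the eye, not names taken from [VG97].

import random

def metropolis_step(current_path, scene):
    candidate = mutate(current_path)                      # insert new points / new directions
    f_current = path_contribution(current_path, scene)
    f_candidate = path_contribution(candidate, scene)
    # Accept the mutation with probability proportional to the light it carries to the eye
    accept = 1.0 if f_current == 0 else min(1.0, f_candidate / f_current)
    return candidate if random.random() < accept else current_path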

Photon Mapping was presented in [Jen96] and [Jen01]. This method improves on the previous bidirectional path tracing and Metropolis light transport algorithms in terms of obtaining realistic results with complex effects, such as realistic caustics, diffuse interreflections, or subsurface scattering. In a first step, the algorithm traces photons through the scene. The photon hits are stored in a KD-Tree (see Section 2.2.2), also called the Photon Map. The energy absorbed by the surface, that is, the indirect illumination, is determined by a stochastic choice, where the photon energy is absorbed, reflected or refracted based on the surface material properties. Another map, the Caustic Map, stores the photons whose previous intersection was on a specular surface.

The caustic map is a subset of the photon map, but it has to store more samples to get accurate caustic results. In the next step, view rays are traced through the scene. When a hit succeeds, a density estimator obtains the radiance at the intersected point from the interpolation of the nearest photons in the photon map (see Figure 2.4). This radiance is accounted for together with the direct illumination, obtained by tracing a ray to the light sources; the specular lighting, obtained using classic ray tracing; and the caustics, calculated from the caustic map, also using a density estimator. Unlike the previous methods, this is a biased method, which means that the estimated radiance can deviate systematically from the true solution. In practice, this is usually alleviated by increasing the number of sampled photons.

Figure 2.4: Photon Map (left) and Caustic Map (right) creation. The sphere (bottom) is used as a density estimator over the KD-Tree that contains the maps.
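
A hedged sketch of the density estimation step: the k nearest photons are gathered from the photon map (a KD-Tree) and their power, weighted by the BRDF, is averaged over the disc that encloses them. The photon_map.k_nearest() interface, distance2() and the other names are assumptions made for illustration, not the API of [Jen96] or [Jen01].

import math

def estimate_radiance(photon_map, point, normal, material, view_dir, k=50):
    photons = photon_map.k_nearest(point, k)      # KD-Tree nearest-neighbour query
    radius2 = max(distance2(point, p.position) for p in photons)
    flux = BLACK
    for p in photons:
        # Weight each photon's power by the surface BRDF
        flux += material.brdf(p.direction, view_dir) * p.power
    return flux / (math.pi * radius2)             # divide by the area of the gathering disc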

Acceleration structures

One of the most costly lighting simulation operations is traversing the ray through the scene geometry, since a ray-geometry intersection check must be executed for each object. Organizing the scene geometry in triangle or quad meshes simplifies the intersection test, but it remains a slow rendering stage for large scenes. To improve this, several acceleration structures have been proposed. The most basic is a regular grid, where each cell stores a list of all the geometry it contains or intersects. But this only reduces the problem linearly. To improve the efficiency, hierarchical structures have to be used.

The Octree [Gla84] [San85] is a structure of cubic boxes (voxels), where each box is recursively subdivided into eight non-overlapping equal boxes, defining a tree. The octree nodes that are empty or that contain less than a specified amount of geometry are not subdivided any further. Since the ray-box intersection is easy to compute, the ray traversal is easy too. First, the ray is tested against the top-level voxels. If it intersects, the next level is considered, and the intersection test is performed for the eight subboxes. This process is repeated until the intersected box is empty, or until it is a tree leaf, where the ray intersection test is calculated against the list of stored geometry. The octree size is determined by this list size: a smaller list means a larger octree.
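
The traversal described above can be sketched recursively; ray_box_intersect() and intersect_list() are assumed helpers, and hit.t denotes the hit distance along the ray.

def traverse_octree(node, ray):
    if not ray_box_intersect(ray, node.box):
        return None                               # the ray misses this voxel
    if node.is_leaf:
        # Leaf: test the ray against the list of stored geometry
        return intersect_list(ray, node.geometry)
    closest = None
    for child in node.children:                   # the eight subboxes of the next level
        hit = traverse_octree(child, ray)
        if hit is not None and (closest is None or hit.t < closest.t):
            closest = hit
    return closest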

There are many works on octree improvements for ray traversal [ES94], such as [Sam89], which introduces neighbor finding; [Grö93] for dynamic scenes; [WSC95], which constructs the octree based on an estimation of the ray-geometry intersection tests; or [RUL00], which presents a traversal algorithm that works in tangent space and uses a voxel encoding method that allows fast traversal.

The Binary Space Partition (BSP) tree [FKN80] is another structure that subdivides the space using planes (in 3D space), where each of these planes is a node of a tree, called the BSP tree. Each node subdivides the space into two parts, placing the plane at the middle of the current geometry bounding box, and this is repeated until the subdivided geometry is small enough. The traversal is done by checking, at each tree level, on which side of the plane the ray hits. The most difficult part of this algorithm is the BSP tree generation, since for each node a subdivision plane has to be chosen, and this choice directly affects the ray traversal performance. Many works and improvements can be found in the literature [TN87] [Nay93].
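
A sketch of the construction under the simple rule described above (the split plane placed at the middle of the current bounding box); in a real builder, geometry straddling the plane must be handled explicitly, which is omitted here. Leaf, Node, middle_plane(), in_front() and split_bounds() are illustrative names only.

def build_bsp(geometry, bounds, max_leaf_size=8):
    if len(geometry) <= max_leaf_size:
        return Leaf(geometry)                     # the subdivided geometry is small enough
    plane = middle_plane(bounds)                  # plane choice drives traversal performance
    front = [g for g in geometry if in_front(g, plane)]
    back = [g for g in geometry if not in_front(g, plane)]
    if not front or not back:
        return Leaf(geometry)                     # degenerate split, stop subdividing
    front_bounds, back_bounds = split_bounds(bounds, plane)
    return Node(plane,
                build_bsp(front, front_bounds, max_leaf_size),
                build_bsp(back, back_bounds, max_leaf_size))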

The KD-Tree [Ben75][Ben80] can be considered a particular case of the BSP tree. In this case, the planes are axis aligned, so there are only three possible plane orientations, and the planes can be placed anywhere in the geometry bounding box.

The most useful advantage of the KD-Tree is the k-nearest neighbor algorithm. From a selected point, the algorithm searches for the k nearest neighbors by climbing the hierarchy, checking for each adjacent node whether it intersects a virtual sphere whose radius is the current maximum neighbor distance. This is the basis used by the Photon Mapping method (see Section 2.2.2) to calculate the average energy around a hit point in the Photon Map.
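
The search can be sketched on a KD-Tree whose nodes each store one photon, a split axis and a split position; a branch is only visited when the virtual search sphere (whose radius is the current k-th best distance) crosses its splitting plane. The node layout and distance2() are assumptions.

def k_nearest(node, point, k, best=None):
    if best is None:
        best = []                                 # list of (squared distance, photon)
    if node is None:
        return best
    d2 = distance2(point, node.photon.position)
    best.append((d2, node.photon))
    best.sort(key=lambda entry: entry[0])
    del best[k:]                                  # keep only the k closest found so far
    delta = point[node.axis] - node.split         # signed distance to the splitting plane
    near, far = (node.left, node.right) if delta <= 0 else (node.right, node.left)
    k_nearest(near, point, k, best)
    # Visit the far branch only if the search sphere crosses the splitting plane
    if len(best) < k or delta * delta < best[-1][0]:
        k_nearest(far, point, k, best)
    return best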

The Bounding Volume Hierarchy (BVH) [RW80][KK86][GS87] is a tree structure where each node is a minimum axis-aligned bounding box (AABB) that encloses the geometry (see Figure 2.5). Each node is subdivided into two child nodes that represent the bounding boxes of the subdivided geometry. The traversal is done as in an octree, checking the ray-box intersection until a leaf is reached, or discarding the tree branch if it does not intersect.
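
A hedged traversal sketch, analogous to the octree case; ray_box_intersect() and intersect_list() are the same assumed helpers, and the closer of the two child hits is returned.

def traverse_bvh(node, ray):
    if not ray_box_intersect(ray, node.aabb):
        return None                               # discard this whole branch
    if node.is_leaf:
        return intersect_list(ray, node.geometry)
    left_hit = traverse_bvh(node.left, ray)
    right_hit = traverse_bvh(node.right, ray)
    if left_hit is None:
        return right_hit
    if right_hit is None:
        return left_hit
    return left_hit if left_hit.t < right_hit.t else right_hit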

As stated in [Hav00], KD-Trees are the best option for static scenes.

Figure 2.5: Bounding Volume Hierarchy example.

Furthermore, efficient BVH software implementations remained inferior to KD-Trees in performance [WBS06]. In general, neither method is suitable for dynamic scenes. The best results are obtained when a Surface Area Heuristic (SAH) [Hav00][Wal07] is used, but this implies an exhaustive analysis of the scene geometry. There is a hybrid approach between the BVH and the KD-Tree, the Bounding Interval Hierarchy (BIH) [WK06], which takes advantage of the best of both methods and is useful for dynamic scenes.
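
The SAH cost of a candidate split can be written, under the usual assumption that the probability of hitting a child is proportional to its surface area relative to the parent, as a traversal cost plus the expected intersection cost of the two children. The sketch below illustrates this; the constants c_trav and c_isect are illustrative values, not taken from [Hav00] or [Wal07].

def sah_cost(sa_parent, sa_left, n_left, sa_right, n_right,
             c_trav=1.0, c_isect=2.0):
    # Expected cost = traversal + P(hit left) * N_left * C_isect
    #                           + P(hit right) * N_right * C_isect
    return (c_trav
            + (sa_left / sa_parent) * n_left * c_isect
            + (sa_right / sa_parent) * n_right * c_isect)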