

Figure 4.11: Examples of two types of levels of sparseness: (a) based on object representations and (b) based on different rendering techniques.

4.4 Cut-Away Generation

Previous sections discussed importance compositing techniques and levels-of-sparseness schemes. Both components of importance-driven visualization have been demonstrated on local feature enhancement resulting in cut-away views, but they are applicable to global feature enhancement as well. This section is relevant for local feature enhancement, because it describes the cut generation used there.

The generation of the cut-away effect can be achieved in several ways.

One possibility is to generate a geometry representation of the cut-out region and perform a test whether the current ray position is inside or outside of the cut-out geometry. This can be effectively implemented by introducing a cut-out depth buffer which contains the depth information of the cut-out geometry. During the ray traversal a cut-out test is integrated to estimate what level of sparseness should be applied to a particular sample position.

Figure 4.12: Sparse and dense rendering styles: The occluding context object is rendered using (a) summation, (b) illustrative contour enhancement, and (c) maximum intensity projection.
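The per-sample cut-out test described above can be sketched as follows. This is a minimal illustration, not the thesis implementation; the function and variable names are hypothetical, and the two sparseness levels are reduced to simple labels:

```python
def classify_ray_samples(sample_depths, cutout_depth, px, py):
    """Classify the samples of one ray against the cut-out depth buffer.

    sample_depths: depths of the samples along the ray, front to back.
    cutout_depth:  2D cut-out depth buffer; px, py: the ray's pixel.
    Illustrative sketch -- real renderers apply per-level rendering
    styles instead of returning labels.
    """
    e_i = cutout_depth[py][px]  # cut-out depth for this ray
    levels = []
    for d in sample_depths:
        if d < e_i:
            levels.append("sparse")  # sample lies inside the cut-out
        else:
            levels.append("dense")   # sample lies behind the cut-out
    return levels
```

In a GPU ray caster this test is a single comparison per sample against the depth fetched once per ray.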

The cut-out depth buffer is computed from the footprint of the focus object, i.e., those viewport pixels where the focus object projects to. The footprint contains depth information of the focus object's last hit for each ray along the viewing direction. The cut-out depth buffer is initialized with this value, which gives the farthest distance of the focus object to the viewer (e_max). To realize a conical clipping geometry, the footprint in the cut-out depth buffer is enlarged to represent the conical side-walls of the cut-out geometry. This is done by applying a 2D chamfer distance transform to the cut-out depth buffer [2]. First, the maximal depth value e_max of the cut-out depth buffer is calculated. The depth values e_i of the conical part of the cut-out are calculated from the maximal depth value e_max, the slope factor s_c of the cut-out, and the image-space distance d_i from pixel i to the footprint, as shown in Equation 4.2:

e_i = e_max - d_i / s_c    (4.2)

This generation is also illustrated in Figure 4.13, where e_max is represented by the darkest blue color, d_i is the distance of a pixel from the footprint, and the slope s_c is illustrated as a color difference. Figure 4.13 (a) shows a conical cut-away effect and Figure 4.13 (b) illustrates the corresponding cut-out depth buffer.
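The construction of the cut-out depth buffer can be sketched in a few lines. This is an assumed minimal implementation: it encodes the footprint as non-NaN depth values, uses SciPy's chamfer distance transform for d_i, and clamps negative depths to zero (a choice not stated in the text):

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

def build_cutout_depth_buffer(footprint_depth, s_c):
    """Build the conical cut-out depth buffer from the focus footprint.

    footprint_depth: 2D array holding the last-hit depth of the focus
    object where it projects, and NaN elsewhere.  s_c: cone slope factor.
    """
    footprint = ~np.isnan(footprint_depth)
    e_max = np.nanmax(footprint_depth)       # farthest focus depth
    # chamfer distance d_i of every pixel to the footprint
    d = distance_transform_cdt(~footprint, metric='chessboard')
    # Equation 4.2: e_i = e_max - d_i / s_c on the conical side-walls
    buf = np.maximum(e_max - d / s_c, 0.0)   # clamp: assumed near plane at 0
    # inside the footprint keep the per-ray last-hit depth
    buf[footprint] = footprint_depth[footprint]
    return buf
```

A larger s_c yields steeper side-walls, since the depth falls off more slowly with image-space distance.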

Figure 4.13: (a) Generated local feature enhancement via cut-away view and (b) the corresponding cut-out depth buffer generated from the focus object.

For hierarchically represented volumetric data such as given by an octree structure, there is also another fast and approximate way to generate the cut-out. Instead of generating the cut-out depth buffer from the footprint of the focus object, represented by a set of voxels, it is possible to generate the buffer from the footprint of the focus object's bounding boxes. These bounding boxes are readily available in the octree hierarchy. The last hit e_max is stored in the cut-out depth buffer as in the previous case.

To avoid rectangularly shaped cut-outs, the depth buffer is processed by a smoothing image-processing operator. Compared to the distance-transform approach, the cut-out depth buffer is in this case generated faster, as it is built from higher hierarchy levels.
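The bounding-box variant can be sketched as below. The text does not specify the smoothing operator, so a Gaussian filter is an assumed choice; the rectangle-list input is an illustrative simplification of the octree footprints:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cutout_buffer_from_bboxes(shape, bboxes, sigma=3.0):
    """Approximate cut-out depth buffer from bounding-box footprints.

    bboxes: list of (x0, y0, x1, y1, depth) screen-space rectangles with
    the last-hit depth e_max of each projected octree bounding box.
    Smoothing rounds off the rectangular cut-out silhouette.
    """
    buf = np.zeros(shape)
    for x0, y0, x1, y1, depth in bboxes:
        # keep the farthest last hit inside overlapping box footprints
        buf[y0:y1, x0:x1] = np.maximum(buf[y0:y1, x0:x1], depth)
    # smoothing operator avoids rectangularly shaped cut-outs
    return gaussian_filter(buf, sigma=sigma)
```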

Generating the cut-outs provides results with a correct spatial arrangement of features, i.e., the focus features do not appear in front of the context ones as in the case of cylindrical MImP (Section 4.2). For the realization of MImP, ray samples that belong to the context object are simply skipped until the ray reaches the depth e_i stored in the cut-out depth buffer. For object-space AImC, samples along the ray with a depth smaller than e_i are not skipped, but they are represented sparsely.
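The difference between the two modes can be made concrete in a small front-to-back compositing sketch. The opacity scale used for the sparse AImC representation is an illustrative stand-in for whichever level of sparseness is actually assigned:

```python
def composite_ray(samples, depths, e_i, mode="MImP"):
    """Front-to-back compositing with the cut-out test.

    samples: (color, opacity) pairs along the ray, front to back.
    e_i: cut-out depth for this ray.  MImP skips context samples in
    front of e_i entirely; AImC renders them sparsely (here modeled
    as a reduced opacity -- an assumed, illustrative choice).
    """
    color, alpha = 0.0, 0.0
    for (c, a), d in zip(samples, depths):
        if d < e_i:
            if mode == "MImP":
                continue               # skip until the ray reaches e_i
            a *= 0.1                   # AImC: sparse representation
        color += (1.0 - alpha) * a * c  # front-to-back accumulation
        alpha += (1.0 - alpha) * a
    return color, alpha
```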

The side-walls need to be shaded according to the cut-out geometry.

This requires proper computation of the surface normal at the cut-out region. In the case of volumetric data, the normal vector (or gradient) at a particular sample position is estimated from samples in its local neighborhood. At the cut-out walls, however, this normal vector will result in incorrect shading. Correct shading requires a vector perpendicular to the side-wall geometry. In the case of a very opaque representation of the volumetric data along the cut-out, this will result in proper shading. In the case of semi-transparent samples at the side-wall, however, correct shading will not be achieved. Therefore Weiskopf et al. [85] propose to continuously modify the normal vector values in a small region behind the cut-out side-walls. The normals in this region are computed as a linear combination of the cut-out normal and the normal (i.e., gradient vector) estimated from the local neighborhood of the processed sample.
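The linear combination of the two normals can be sketched as follows. The blend weight is an assumed linear ramp over a band of width `band` behind the wall; the thesis and Weiskopf et al. [85] do not prescribe this particular weighting:

```python
import numpy as np

def blended_normal(gradient, cutout_normal, depth, e_i, band=0.05):
    """Blend the cut-out wall normal with the local volume gradient.

    Within a band of width `band` behind the cut-out wall (at depth
    e_i) the shading normal is a linear combination of the wall normal
    and the gradient estimated from the sample's neighborhood.
    """
    # 0 directly at the wall, 1 once the sample is deeper than the band
    t = float(np.clip((depth - e_i) / band, 0.0, 1.0))
    n = (1.0 - t) * np.asarray(cutout_normal) + t * np.asarray(gradient)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n
```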

In the importance-driven local feature enhancement the surface normals of the cut-out side-walls are calculated from the cut-out depth buffer and the cut-out slope s_c. For example, if the viewing direction parallel to the z axis is considered, the z component of the gradient is constant and corresponds to the slope factor s_c. The x and y components are computed from the distance vector to the footprint in the cut-out depth buffer. The distance vector is defined as the vector between the current pixel position in the cut-out depth buffer and the closest footprint pixel.
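This construction can be sketched directly. The brute-force closest-footprint-pixel search is an illustrative simplification; in practice the closest pixel falls out of the distance transform used to build the buffer:

```python
import numpy as np

def cutout_wall_normal(pixel, footprint_pixels, s_c):
    """Side-wall normal from the cut-out depth-buffer geometry.

    With the viewing direction along the z axis, the z component is the
    constant slope factor s_c, and the x/y components come from the
    normalized distance vector between the pixel and its closest
    footprint pixel.
    """
    p = np.asarray(pixel, dtype=float)
    fp = np.asarray(footprint_pixels, dtype=float)
    closest = fp[np.argmin(np.linalg.norm(fp - p, axis=1))]
    dxy = p - closest                      # image-space distance vector
    n_xy = dxy / np.linalg.norm(dxy)       # unit x/y components
    n = np.array([n_xy[0], n_xy[1], s_c])  # constant z component
    return n / np.linalg.norm(n)           # normalized shading normal
```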

The approach is analogous to the approach of Shin and Shin [70]. The difference in the rendering result is illustrated in Figure 4.14. The dataset


4.5 Results