
Scope of the Thesis


This work aims to strengthen feature-specific volume visualization. A feature is a property or an object in a dataset that is of prime importance for analysis and decision making. The proposed techniques are designed to meet the following guidelines:

• Features of interest are identified in an application area.

• Innovative visualization algorithms are proposed.

• Parameters which require user interaction are minimized.

• Utility and usability of the presented algorithms are established.

In chapter 2, feature peeling is proposed to peel off layers of data from MRI and CT datasets to reveal underlying structures [48]. Layers are identified by casting view-dependent rays and processing the encountered scalar values. The technique enhances the visualization of MRI data and speeds up the exploration of CT and MRI datasets. The user does not have to perform the laborious task of specifying a transfer function; instead, just two thresholds need to be specified.
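
As a rough illustration of the ray-profile idea, the sketch below detects candidate layer transition points along a single ray. The function and parameter names, the synthetic profile, and the exact detection rule are assumptions for illustration only; chapter 2 defines the actual criteria and thresholds.

```python
import numpy as np

def find_transition_points(profile, peak_threshold, slope_threshold):
    """Return indices where a new layer is assumed to start along one ray.

    Illustrative rule (not the exact criterion of chapter 2): a transition
    is placed at a local minimum that follows a peak exceeding
    peak_threshold, provided the profile rises towards the next sample
    with a slope of at least slope_threshold.
    """
    transitions = []
    slope = np.diff(profile)                # slope[i] = profile[i+1] - profile[i]
    peak_seen = False
    for i in range(1, len(profile) - 1):
        if profile[i] > peak_threshold:
            peak_seen = True
        is_local_min = profile[i] <= profile[i - 1] and profile[i] < profile[i + 1]
        if peak_seen and is_local_min and slope[i] >= slope_threshold:
            transitions.append(i)
            peak_seen = False               # start looking for the next layer
    return transitions

# Synthetic ray profile with two density humps, i.e. two layers.
profile = np.concatenate([np.hanning(40) * 0.8, np.hanning(60) * 0.6])
print(find_transition_points(profile, peak_threshold=0.3, slope_threshold=0.0))
```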

The algorithm is compared with the opacity peeling technique [64]. Figure 1.7 shows an MRI head dataset with skin and bone peeled off. The brain surface is clearly visible.

3DCT datasets are often used for first part inspection of industrial components. The inspection is typically carried out by extracting a surface from the volume data and then comparing the extracted surface model with the CAD model of the specimen.

Figure 1.7: A rendering of a human head scanned with MRI. The skin and bone are peeled off and the brain surface is clearly visible.

Figure 1.8: An industrial specimen color coded to show differences between its 3DCT volume and the corresponding CAD model.

Fabrication artifacts are the most interesting features in such a comparison. They directly influence the production quality of an industrial component. The two-step process of comparison is error prone and time consuming (chapter 3). A direct comparison between a CAD model and the corresponding 3DCT volume can be performed interactively [47]. It avoids artifacts introduced by the surface extraction process and it reduces the number of user-specified parameters. Figure 1.8 shows a surface model of an industrial component. The dataset is color coded after performing a direct comparison between its 3DCT volume and the corresponding CAD model.
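
A minimal sketch of one way to obtain such a color coding is given below. It assumes the CAD surface mesh (vertices and normals) and the 3DCT volume are already registered in the same voxel coordinate frame; the normal-search heuristic and all parameter names are illustrative assumptions, not the comparison method of chapter 3 [47].

```python
import numpy as np
from scipy.ndimage import map_coordinates

def surface_deviation(volume, vertices, normals, iso_value,
                      search_range=5.0, step=0.25):
    """For each CAD vertex, return the signed offset (in voxels) along its
    normal at which the CT density first crosses iso_value, marching from
    -search_range to +search_range. NaN means no crossing was found."""
    offsets = np.arange(-search_range, search_range + step, step)
    deviation = np.full(len(vertices), np.nan)
    prev = None
    for t in offsets:
        # Sample the volume at vertex + t * normal (trilinear interpolation).
        pts = (vertices + t * normals).T          # shape (3, n_vertices)
        inside = map_coordinates(volume, pts, order=1) >= iso_value
        if prev is not None:
            hit = (inside != prev) & np.isnan(deviation)
            deviation[hit] = t
        prev = inside
    return deviation  # color code with a diverging colormap, e.g. blue-white-red
```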

Surface extraction from a volumetric dataset is an error-prone task. The errors are introduced either by the extraction process or by bias in the dataset. Surface extraction algorithms have considerably improved since the marching cubes algorithm [44] was developed in 1987. Not enough work, however, has been done to remove bias in the datasets. Bias in the scalar values might occur due to various CT artifacts and can severely affect the shape of various features in the specimen. The surface model of a specimen can be enhanced by using iso-value fields (chapter 4). The techniques presented in this thesis are useful in various other scenarios as well, such as smooth blending of multiple surfaces [20].

Enamel and dentine are both parts of a tooth but have different densities. In figure 1.9, iso-surfaces of enamel and dentine are stitched together into one triangular mesh using an iso-value field.
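
One simple way to realize a spatially varying iso-value is sketched below: subtract the iso-value field from the volume and extract the zero level set with a standard marching cubes implementation. This is only an illustration of the idea, not necessarily how the locally adaptive marching cubes of chapter 4 proceeds; the variable names and the blending example are assumptions.

```python
import numpy as np
from skimage.measure import marching_cubes

def extract_adaptive_isosurface(volume, iso_field):
    """Extract the surface where volume(x) == iso_field(x) by running
    standard marching cubes on the difference volume at level zero."""
    assert volume.shape == iso_field.shape
    verts, faces, normals, _ = marching_cubes(volume - iso_field, level=0.0)
    return verts, faces, normals

# Hypothetical usage: blend smoothly between a dentine and an enamel iso-value.
# weight = ...                  # 3D array in [0, 1], e.g. from a distance field
# iso_field = (1 - weight) * dentine_iso + weight * enamel_iso
# verts, faces, normals = extract_adaptive_isosurface(volume, iso_field)
```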

Comparison of datasets is a common task for professionals working with 3DCT. The comparison is performed to inspect and compare artifacts in different datasets (chapter 5). Artifacts are the features of interest in such an application scenario. The users want a system which is able to highlight artifacts efficiently and which can handle multiple large datasets [46, 63]. Two studies are presented to show the effect of the proposed system on the workflow of domain specialists. Four slices from different datasets are compared in figure 1.10. The color coding displays the relative difference between the scalar values of the datasets in the sectors and the reference dataset displayed in the central circle (220 kV).

Figure 1.9: Locally adaptive marching cubes is utilized to generate a surface model of a human tooth dataset.

Figure 1.10: Comparison of four 3DCT datasets. The 220 kV dataset is the reference dataset. All the other datasets are color coded by computing the local difference between their scalar values and the scalar values of the reference dataset.
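
The color coding of figure 1.10 can be approximated by a per-voxel relative difference against the reference dataset, as in the sketch below. The slices are assumed to be registered and resampled to the same grid, and the difference measure, colormap, and clipping limit are placeholders; the exact measure used in [46, 63] may differ.

```python
import numpy as np
from matplotlib import cm

def relative_difference(dataset, reference, eps=1e-6):
    """Per-voxel relative difference with respect to the reference dataset."""
    return (dataset - reference) / np.maximum(np.abs(reference), eps)

def color_code(diff, limit=0.2):
    """Map differences in [-limit, +limit] onto a diverging colormap."""
    normalized = np.clip((diff + limit) / (2 * limit), 0.0, 1.0)
    return cm.coolwarm(normalized)          # RGBA per voxel or per slice pixel

# Usage, assuming two registered slices of equal shape:
# rgba = color_code(relative_difference(slice_110kV, slice_220kV))
```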

Great works are performed, not by strength, but by perseverance.

Samuel Johnson

2 Feature Peeling

Transfer functions are used in 3D visualization to assign user-defined optical properties to a volumetric dataset based on scalar values. The specification of the optical properties should be able to highlight defective tissues or features that are of interest for a particular medical or industrial study. This is a non-trivial task and often requires considerable time and expertise to achieve desired results. While one might be able to set up a system which can be reused for several patients, this is not always possible.

Magnetic Resonance Imaging (MRI) datasets are quite different from, and more difficult to handle than, Computed Tomography (CT) datasets. Hounsfield numbers give a good and patient-independent indication of the tissue types in CT. In contrast, the variance of tissue response between different patients in MRI datasets is too large to use pre-defined transfer functions for the detection of features. Transfer function specification has to be performed every time a new MRI dataset is generated, which makes it a difficult and time-consuming task.

Additionally, MRI typically contains a considerable amount of noise that makes it more challenging to produce insightful visualizations. There is high-frequency noise that affects the clarity of the images, and there is low-frequency noise that slowly changes the intensity of the signal. 3D visualization techniques are rare for MRI datasets, and medical personnel often resort to manual exploration of the datasets through slice-based inspection (MPR).
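
As an illustration only (the de-noising step used later in this chapter may differ), high-frequency noise can be suppressed with a small Gaussian kernel, while the slowly varying intensity bias can be estimated with a much wider kernel and divided out. Kernel widths are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_mri(volume, noise_sigma=1.0, bias_sigma=30.0, eps=1e-6):
    """Suppress high-frequency noise and flatten a low-frequency intensity bias.

    A small Gaussian removes speckle-like noise; dividing by a heavily
    smoothed copy evens out the slowly varying signal intensity."""
    smoothed = gaussian_filter(volume.astype(np.float32), sigma=noise_sigma)
    bias = gaussian_filter(smoothed, sigma=bias_sigma)
    return smoothed / np.maximum(bias, eps)
```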

We propose a novel rendering technique that identifies interesting features along viewing rays based on a ray profile analysis. For each particular viewpoint, the algorithm allows the user to browse through the layered features of the dataset (see section 2.2). This technique can be used without specifying any transfer function and hence is suitable for time-critical applications. Further, we include a de-noising step to be able to deal with MRI datasets. We successfully apply this technique to a variety of medical and synthetic datasets (section 2.4).

This work aims at similar goals as the work of Rezk-Salama and Kolb [64]. We intend to show the entire dataset to the user in a layered manner without putting effort into setting up a complicated transfer function. We detail the differences as well as the relationship to other work in the next section.

2.1 Related Work

In volume rendering, transfer function specification is the main tool for the user to define optical properties. The transfer function guides the user to detect features in a volumetric dataset. The 1D transfer function is the simplest example; it maps scalar values to opacity and color. More effective, but more complex, transfer functions that require user training and experience have been proposed.
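
For reference, a 1D transfer function is essentially a lookup from scalar value to color and opacity. The sketch below builds one by piecewise-linear interpolation; the control points are arbitrary placeholders.

```python
import numpy as np

def make_1d_transfer_function(control_points):
    """Build a 1D transfer function from (scalar, r, g, b, alpha) control
    points using piecewise-linear interpolation."""
    pts = np.asarray(sorted(control_points), dtype=np.float32)
    scalars = pts[:, 0]

    def tf(values):
        values = np.asarray(values, dtype=np.float32)
        # Interpolate each of the four RGBA channels independently.
        return np.stack([np.interp(values, scalars, pts[:, c])
                         for c in range(1, 5)], axis=-1)

    return tf

# Hypothetical control points: air transparent, soft tissue reddish, bone white.
tf = make_1d_transfer_function([(0,   0.0, 0.0, 0.0, 0.0),
                                (80,  0.8, 0.4, 0.3, 0.1),
                                (255, 1.0, 1.0, 1.0, 0.9)])
print(tf([0, 100, 200]).shape)   # -> (3, 4)
```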

A number of interesting enhancements, meant to give the user insight into the data, have been developed for the 1D transfer function. Simple histograms [7], the contour spectrum [2], and Laplacian-weighted histograms [56] have been suggested. Potts and Möller [58] investigate the usage of a logarithmic scale that eases transfer function specification.

Multi-dimensional transfer functions [34] have been introduced which assign optical properties based not only on data values but also on first and second derivative information thereof. Kindlmann and Durkin [32] automate the generation of a transfer function. They use the relationship between scalar values and their first and second order derivatives to highlight boundaries in the dataset.

Kniss et al. [35] describe how probing and clipping can enhance the understanding of the features that exist in a dataset. Patel et al. use innovative moment curves to specify feature aware transfer functions [55]. Bergner et al. [4] use a spectral representation of colors instead of the RGB model to enhance details and to allow an effective exploration of datasets. Transfer function specification is also tailored specifically to the visualization of medical datasets by making use of metadata [12].

Gao and Shen [18] efficiently extract isosurfaces for a given viewpoint. They divide the dataset into spherical partitions and store these in a binary tree designed for fast access. They propose a method for extracting a single iso-surface and thus their technique is not generally applicable to CT and MRI based medical datasets.

Höhne et al. [29] extend the Marr-Hildreth segmentation operator to 3D. They apply their technique in a view-independent manner. The work requires correction of errors through user input and produces a segmentation of the dataset as output. In our approach, we do not intend to perform segmentation but instead want to reveal features in a view-dependent manner. As we neither require human intervention nor any pre-processing of the entire dataset, our technique is interactive and we can control the level of peeling in real-time. A detailed description of volume graphics techniques and their applicability to medical visualization is given by Engel et al. [17].

While most of these techniques require user intervention, it would be preferable to cut this step from the visualization process and provide quick insight into the dataset. This is the idea of the opacity peeling approach of Rezk-Salama and Kolb [64]. It allows a layered browsing of the dataset. The layers are defined through accumulated opacity and essentially all the information in the dataset is visible. As the layers are based entirely on visibility (as opposed to features), objects of interest might be split and distributed among several layers. Instead of peeling different layers of opacity, we propose to analyze ray profiles and split the rays not according to opacity thresholds, but rather at possible feature transition points.

Figure 2.1: A ray profile showing three features as prominent density peaks. Features are marked with ovals and vertical lines show the transition points. Transition points split a ray profile into different layers.
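
A sketch of how a selected layer could be rendered once the transition points of a ray are known: standard front-to-back compositing is simply restricted to the samples between two consecutive transition points. The emission and opacity model below is a placeholder, not the shading used in chapter 2.

```python
import numpy as np

def composite_layer(profile, transitions, layer_index, opacity_scale=0.05):
    """Front-to-back compositing of one peeled layer of a ray profile.

    transitions are the indices produced by the transition-point detection;
    layer k covers the samples between transition k-1 and transition k
    (layer 0 starts at the first sample)."""
    bounds = [0] + list(transitions) + [len(profile)]
    start, end = bounds[layer_index], bounds[layer_index + 1]
    color, alpha = 0.0, 0.0
    for value in profile[start:end]:
        sample_alpha = float(np.clip(value * opacity_scale, 0.0, 1.0))
        color += (1.0 - alpha) * sample_alpha * value   # scalar value doubles as emission
        alpha += (1.0 - alpha) * sample_alpha
        if alpha > 0.99:                                # early ray termination
            break
    return color, alpha
```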
