
3.2 Visualization

3.2.2 Direct volume rendering

Direct volume rendering is a technique in which a volume of values is rendered directly, without an intermediate step that creates surface representations of parts of it. The technique was first used for medical CT data (Drebin et al. (1988); Hohne and Bernstein (1986)). A common way to do this is raymarching: for each pixel on the screen, a “ray” is cast from the camera. The ray marches in steps through the volume, accumulating color and opacity from the material it traverses until the accumulated color is fully opaque, and the resulting color is then rendered to the screen.
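As a rough illustration of this accumulation loop, the sketch below composites color and opacity front to back along a single ray through a scalar volume. The nearest-neighbour sampling, fixed step size and termination threshold are simplifications chosen for brevity rather than part of any cited method.

```python
import numpy as np

def raymarch_pixel(volume, transfer_function, origin, direction,
                   step=0.5, max_steps=512):
    """Front-to-back compositing along one ray through a 3D scalar volume.
    origin and direction are given in voxel coordinates."""
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    position = np.asarray(origin, dtype=float)

    color = np.zeros(3)
    alpha = 0.0
    for _ in range(max_steps):
        idx = np.round(position).astype(int)          # nearest-neighbour sample
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                                     # the ray has left the volume
        sample_rgb, sample_a = transfer_function(volume[tuple(idx)])
        # Samples behind already-accumulated opacity contribute less.
        color += (1.0 - alpha) * sample_a * np.asarray(sample_rgb)
        alpha += (1.0 - alpha) * sample_a
        if alpha >= 0.99:                             # early ray termination
            break
        position += step * direction
    return color, alpha
```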

There are many ways to further improve the result, such as adding more advanced lighting techniques (Ament and Dachsbacher (2016)) or highlighting features of interest through halos (Díaz et al. (2010)) or by peeling back obscuring features (Bruckner et al. (2006)).

With all these methods, each value in the volume needs to be assigned some material properties to distinguish them. The function that maps these values to properties is called the transfer function, and in its most basic form it maps the value of each point in the field to a color and opacity. There are also more advanced transfer functions, such as ones that provide additional material properties (Bruckner et al. (2006)), or two-dimensional transfer functions that take both the value and the derived gradient vector of the volume values as input (Ljung et al., 2016, p. 682). The gradient vector lets the transfer function take some local context into consideration, allowing it to further emphasize features such as vessel walls or other organ boundaries characterized by a large change in value in the data. Methods such as these make it possible to more distinctly map material properties to features such as bones, vessels or organs, but take more experience and work to create.
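A minimal example of such a basic transfer function is a piecewise-linear lookup from data value to color and opacity. The breakpoints below are invented for illustration and are not clinically meaningful; a function like this could be passed directly to the raymarching sketch above.

```python
import numpy as np

# Illustrative control points: data values (e.g. Hounsfield units) mapped to
# color and opacity. Values between control points are linearly interpolated.
control_values  = np.array([-1000.0, 100.0, 300.0, 1000.0])
control_colors  = np.array([[0.0, 0.0, 0.0],   # air: black, fully transparent
                            [0.8, 0.2, 0.2],   # soft tissue / contrast: reddish
                            [1.0, 0.9, 0.8],   # bone-like: off-white
                            [1.0, 1.0, 1.0]])
control_opacity = np.array([0.0, 0.05, 0.6, 0.9])

def transfer_function(value):
    """Map a scalar sample to an (rgb, alpha) pair."""
    rgb = np.array([np.interp(value, control_values, control_colors[:, c])
                    for c in range(3)])
    alpha = float(np.interp(value, control_values, control_opacity))
    return rgb, alpha
```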

Transfer functions are unfortunately a source of potentially misleading visualizations. Small changes in the transfer function can result in significant changes in apparent vessel width, making healthy vessels look pathological or vice versa (Persson et al. (2004)). Lundström et al. (2007) suggest visualizing this uncertainty through animation, by animating the transfer function and moving the window to include slightly lower and higher values. This shows a wider range of interpretations of the data, counteracting the possibility that picking a specific transfer function introduces some errors while resolving others. This is less critical in my thesis, as I am interested in the vascular topology more than in the shape or pathology of individual vessels.
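The sketch below illustrates the general idea of such an animation, assuming a simple grayscale ramp as the windowed transfer function; the oscillation amplitude and frame count are arbitrary choices, not values from Lundström et al. Each generated transfer function would be used to render one frame of the animation loop.

```python
import numpy as np

def windowed_tf(center, width):
    """A grayscale ramp transfer function over a value window."""
    def tf(value):
        t = float(np.clip((value - (center - width / 2.0)) / width, 0.0, 1.0))
        return np.array([t, t, t]), t      # both color and opacity follow the ramp
    return tf

def window_animation_tfs(center, width, shift=25.0, frames=16):
    """One transfer function per frame, with the window center oscillating
    by +/- shift around its nominal value over the animation cycle."""
    offsets = np.sin(np.linspace(0.0, 2.0 * np.pi, frames, endpoint=False)) * shift
    return [windowed_tf(center + o, width) for o in offsets]
```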

Figure 3.3: Direct volume rendering of segmented volume data using a style transfer function based on data value and object membership. Illustration by Bruckner and Gröller (2007), Copyright © 2007 John Wiley and Sons

A specialized approach for vessels is proposed in VesselGlyph (Straka et al. (2004)), which uses the extracted centerline of the vessel tree as part of the transfer function to determine opacity in a direct volume rendering. This lets the visualization retain the context of the surrounding structures, while also using the knowledge of where the vessels are to highlight them or even hide occluding tissue. Due to the significant workload required for centerline extraction, I did not attempt this for this study.
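As an illustration of the underlying idea rather than the actual VesselGlyph method, opacity from a basic transfer function could be attenuated with distance from a known centerline, so that tissue occluding the vessels fades out while the vessels themselves stay opaque. The radius and attenuation factor below are made up for the sketch.

```python
def centerline_weighted_opacity(value, distance_to_centerline, base_tf,
                                focus_radius=5.0, context_opacity=0.1):
    """Keep the base transfer function's opacity near the vessel centerline
    and strongly attenuate it for samples further away (occluding context)."""
    rgb, alpha = base_tf(value)
    if distance_to_centerline > focus_radius:
        alpha *= context_opacity
    return rgb, alpha
```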

Joshi et al. (2008) describe the use of polar coordinates to determine vesselness in volume data, and suggest a number of visualization techniques to enhance volume-rendered images. These include visualizing depth or shadows with colder (blue-tinted) colors, as well as surrounding halos that make delineations between structures easier to see. While I do not make use of polar-coordinate vesselness filters, their use of blue-tinted shadows and alternate shading techniques did inspire me to use similar techniques in my indirect volume rendering tool.
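As a loose sketch of the blue-tinted depth cue, each sample's color can be blended toward a cool hue with increasing depth along the ray; the particular cool color and blend strength below are arbitrary choices, not values taken from Joshi et al.

```python
import numpy as np

def depth_cue(rgb, depth, max_depth, cool_color=(0.2, 0.3, 0.6), strength=0.5):
    """Blend a sample color toward a cool blue as depth increases, giving a
    simple depth (or shadow) cue in colder tones."""
    t = strength * float(np.clip(depth / max_depth, 0.0, 1.0))
    return (1.0 - t) * np.asarray(rgb, dtype=float) + t * np.asarray(cool_color)
```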

One issue with CT data with regard to volume rendering of vasculature is that the features of interest are sometimes spread across multiple volumes. This is a result of how CT angiography scans are conducted: a contrast agent is usually injected into a vein, making the vessels stand out against the surrounding tissue in the CT data (Silverman et al. (1984)). It is common for this method to produce one volume dataset for the arteries (where veins are barely visible) and one for the veins (where the arteries are less visible). This happens because by the time the contrast agent has spread to the veins, it has been diluted and is less present in the arteries.

As the scans are taken some time apart in order to capture both of these states, the vessel positions in the two scans can also be offset relative to one another due to patient movement and breathing. This means that in order to see both arteries and veins, it may be necessary to translate and rotate one volume to more closely overlap with the other, and then mix the two datasets with appropriately adjusted transfer functions to show the full vascular system (Cai and Sakas (1999); Lawonn et al. (2018)).
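A minimal sketch of this alignment and mixing step, assuming a known rigid transform (obtained, for example, from manual adjustment or a separate registration step) and one transfer function per dataset:

```python
import numpy as np
from scipy.ndimage import affine_transform

def align_venous_to_arterial(venous, rotation, translation):
    """Resample the venous volume onto the arterial volume's grid using a
    rigid transform (arterial_coord = rotation @ venous_coord + translation).
    affine_transform maps output to input coordinates, hence the inverse."""
    inv_rot = np.linalg.inv(rotation)
    offset = -inv_rot @ np.asarray(translation, dtype=float)
    return affine_transform(venous, inv_rot, offset=offset, order=1)

def blended_sample(arterial_value, venous_value, tf_arterial, tf_venous):
    """Composite one sample from each dataset with its own transfer function,
    keeping whichever contribution is more opaque at this point."""
    rgb_a, a_a = tf_arterial(arterial_value)
    rgb_v, a_v = tf_venous(venous_value)
    return (rgb_a, a_a) if a_a >= a_v else (rgb_v, a_v)
```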

Chapter 4

Visualizations

In this chapter, I outline some of the visualization techniques I have evaluated for use in this thesis based on existing literature on vascular visualization, including those implemented in the tool I have made for this purpose. There is a lack of ready-to-use visualization tools implementing advanced vessel visualization techniques, which means that the time required to implement them from scratch is substantial. Therefore, to limit the scope of the thesis, not all methods could be included.

4.1 Direct Volume Rendering

There are many tools available that can do direct volume visualization. Some, like MeVisLab, are very flexible but may require a high level of familiarity with the tool to assemble a high-quality visualization. I use 3D Slicer (Pieper et al. (2004), Figure 4.1) as my direct volume visualization tool for this survey, as it provides excellent transfer function presets that were easily adjusted to fit new datasets.

The downside of using a third-party renderer is that some advanced volume rendering techniques may not be implemented in the chosen tool. In this case, 3D Slicer does not support chroma depth, aerial perspective shading or shadows. While these features could improve the clarity of the visualization, they have been left out as a concession to practicality. Exploring the full range of available methods would inflate the parameter space of visualizations to implement and evaluate in the study beyond what is feasible.