Visualization of vector fields using Line Integral Convolution and volume rendering



I would like to thank my supervisor, Øyvind Andreassen, for his encouragement and assistance through this work. I would also like to thank Jan Olav Langseth, Bjørn Anders Pettersson Reif and the rest of the staff at FFIBM for their support and help, and my internal supervisor Knut Mørken for his direction and assistance to my study.

Special thanks goes to my wife Gro Bente R. Helgeland for devoting a huge amount of love and support to my study as well.

This thesis is available in different formats on the web page:

ftp://ftp.ffi.no/spub/stsk/ahe/index.html

A summary of this thesis in an online HTML version, including pictures and movies, is also available on this web site.

Oslo, July 2002 Anders Helgeland


2 Background
2.1 Definitions
2.1.1 Vector
2.1.2 Vector field
2.1.3 Field line and streamline
2.1.4 Path line
2.1.5 Streak line
2.2 Field line integration
2.2.1 The ODE system
2.2.2 The grid
2.2.3 Interpolation
2.2.4 Numerical solution
2.3 Visualization techniques for vector fields
2.3.1 Hedgehogs and glyphs
2.3.2 Curve representation
2.3.3 Texture based techniques
2.4 The data sets
2.4.1 Turbulence
2.4.2 Data file format

3 Volume rendering
3.1 Transparency, opacity and alpha values
3.2 Color mapping
3.3 Texture mapping
3.4 Volume rendering techniques
3.4.1 Geometric rendering
3.4.2 Direct volume rendering
3.4.3 Direct volume rendering with 3D texture mapping
3.5 VIZ
3.6 VoluViz

5.1.2 Sparse input texture
5.1.3 Spot size
5.1.4 Detail enlargement
5.2 Seed LIC
5.3 Aliasing

6 Volume visualization with LIC
6.1 Assignment of color and opacity values
6.2 Clipping functionality
6.3 Halo effect
6.3.1 Shading in volume visualization
6.3.2 Shading with LIC
6.4 Two fields visualization
6.4.1 "Polkagris" visualization

7 Summary and conclusion
7.1 Future work


Visualization is a part of our everyday life, from weather maps to exciting computer graphics used by the entertainment industry. Informally, visualization is the transformation of data or information into pictures [1]. It is a tool that engages the human senses including our eyes and brain and is an effective medium for communicating complex information.

The engineering and scientific communities were early adopters of visualization.

The computers were used as a tool to simulate physical processes such as ballistic trajectories, fluid flows and structural mechanics. As the size of the computer simulations grew, it became necessary to transform the results from calculations into pictures. The large amount of data overwhelmed the ability of human perception. In fact, pictures became so important that early visualizations were created manually by plotting data. Today, we can take advantage of advances in computer graphics, computer hardware and software. But, whatever the technology, the application of visualization is the same: to display the results of simulations, experiments, measured data and fantasy, and to use these pictures and movies to communicate, understand and entertain [1].

In scientific visualization, the key goal is to transform data into a visual form that enables us to reveal important information about the data. With modern visualization techniques we can discover details in data sets that would have remained undiscovered without their use. In this way, visualization helps us to better understand various physical phenomena. For scientists working with large digital data sets, the importance of modern visualization techniques can be compared with the astronomer's use of telescopes.

Advances in modern supercomputers have made it possible to do bigger and more accurate simulations of physical phenomena of increasing complexity. The study of turbulence, which is a component of the field of fluid dynamics, is an example of an area that has been dependent on the development in computer technology. It is now generally accepted that the three-dimensional, time dependent solution of the Navier-Stokes equations describes the evolution of incompressible flows. In these simulations, referred to as direct numerical simulations (DNS), all scales of motion are resolved in both time and space. The number of grid points needed for a reasonably accurate simulation is proportional to $Re^{9/4}$, where $Re$ is the Reynolds number [2], expressing the ratio between inertial and viscous forces¹. The Reynolds number is

¹ Turbulence occurs when $Re \gg 1$.


information.

Typically, only the effect of turbulence on quantities of engineering significance is of interest, such as the mean flow of a fluid or, in the case of an aircraft, the drag and lift forces [2]. These averaged turbulent flows are smoother than the actual flow and drastically reduce the number of grid points necessary to simulate a field.

1.2 The problem

The enormous size of today's data sets has led to an increasing demand for more efficient and advanced visualization tools in order to analyze and interpret the data. This becomes especially evident when visualizing vector fields in 3D. Vector fields play an important role in science and engineering. They allow us to describe a wide variety of phenomena like fluid flow and electromagnetic fields. Large vector fields often exhibit quite complex structures, which can be difficult to reveal.

Making an efficient visualization of a vector field is one of the current challenges in scientific visualization. Traditionally, vector data has been represented by glyphs. By glyphs we are referring to any 2D or 3D geometric representation indicating vector magnitude and direction, such as an arrow or a cone [1]. More sophisticated methods include the display of field lines, stream surfaces [3] and flow volumes [4]. Large vector fields and vector fields with wide dynamic ranges in magnitude can be difficult to visualize effectively using the techniques above.

First, both arrows and field lines, if placed densely in space, can produce cluttered and confusing images. Especially in areas of complex flow topology, for example turbulence, arrows and field lines can be difficult to interpret because of the variety of scales and structures in such flows.

Second, the limited number of field lines that can be displayed without "cluttering" the image makes the visualization dependent on the choice of "seed points", which are the start positions of the integrated lines. It is not obvious how to distribute the field lines in space without missing important details of the field. By using texture based techniques, we avoid some of the problems above. These techniques allow the generation of images with a much higher number of field lines, making the position of an individual line less important.

A powerful texture-based visualization method is Line Integral Convolution (LIC), proposed by Cabral and Leedom [5]. Traditionally in LIC, a random texture is blurred along the field lines of a stationary 2D vector field, producing an output texture that reveals the structure of the flow. 3D LIC volumes can be computed in the same manner as in 2D LIC, but this approach leads to dense images where the inner structures of the vector field are difficult to see.


information. Fast response when changing parameters like color and opacity is also important when investigating large data sets. For a scientist who does not know which parameters give good results, it should be possible to make adjustments without having to spend too much time. The making of a good color table is one example of this. These achievements are far from obvious, especially for big data sets, which can be as large as $1000^3$ data points. Texture based volume rendering allows some of the necessary functionality.

1.3 Organization of the thesis

Chapter 2 is a continuation of the introduction and covers some concepts concerning visualization of vector fields. This includes a discussion of various vector field visualization techniques.

Chapter 3 covers topics related to volume rendering and interactive visualization. In chapter 4, we describe the basic ideas of the Line Integral Convolution technique, while in chapters 5 and 6 we propose and study methods for achieving a more effective visualization of three-dimensional vector fields with LIC.

² Rendering is the process that generates 2D images on the computer screen.


2.1 Definitions

Visualization often deals with the time evolution of various fields defined in a three-dimensional space. A key goal in visualization is to identify and clarify certain details of motion contained in the data. Kinematics is the branch of mechanics that deals with quantities involving the description of motion. It treats variables such as displacement, velocity, acceleration, deformation and rotation of objects. This section introduces some concepts that are of relevance to the visualization of vector fields [6], [7]. For a detailed treatment of kinematics, see the book by Kundu [6].

2.1.1 Vector

Vectors are often described as quantities having both magnitude and direction. In a Cartesian coordinate system, a three-dimensional vector $\mathbf{a}$ with components $(a_1, a_2, a_3)$ can be expressed as
$$\mathbf{a} = a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k},$$
where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the unit vectors along the three coordinate axes $x$, $y$ and $z$.

2.1.2 Vector field

A vector field is defined by a map
$$\mathbf{F}: \mathbb{R}^3 \times \mathbb{R} \to \mathbb{R}^3, \qquad (\mathbf{x}, t) \mapsto \mathbf{F}(\mathbf{x}, t).$$
A vector field $\mathbf{F}(x, y, z, t)$ in $\mathbb{R}^3$ has three component scalar fields $F_1$, $F_2$ and $F_3$, so that
$$\mathbf{F}(x, y, z, t) = \big(F_1(x, y, z, t),\; F_2(x, y, z, t),\; F_3(x, y, z, t)\big)$$
(see figure 2.1). When a vector field is independent of time $t$, it is called stationary.


Figure 2.1: A vector field $\mathbf{F}$ assigns a vector $\mathbf{F}(\mathbf{x}, t)$ to each point $\mathbf{x}$ of its domain at time $t$.

2.1.3 Field line and streamline

Field lines can be derived from any vector field, as well as flows. If $\mathbf{F}(\mathbf{x}, t)$ is a vector field, a field line for $\mathbf{F}$ at time $t = t_0$ is a curve $\mathbf{c}(s)$ whose tangent vectors coincide with the vector field, see figure 2.2. For our purpose the curve expressed by the field line is parameterized by the arc length $s$. The field lines can then be characterized by the equation
$$\mathbf{c}'(s) = \mathbf{F}(\mathbf{c}(s), t_0) \quad \Leftrightarrow \quad \frac{d}{ds}\,\mathbf{c}(s) = \mathbf{F}(\mathbf{c}(s), t_0), \qquad (2.1)$$
where $ds = \sqrt{dx^2 + dy^2 + dz^2}$ and the curve $\mathbf{c}(s) = x(s)\mathbf{i} + y(s)\mathbf{j} + z(s)\mathbf{k}$.

Figure 2.2: Field lines for a vector field.

Substituting $\mathbf{F}$ in 2.1 with the velocity field $\mathbf{v}$ yields streamlines.

2.1.4 Path line

When a vector field is considered as a velocity field, we can define the path line of a particle $P$ in the vector field as the trajectory of motion for $P$ over a period of time, see figure 2.3. A path line for a particle with initial position $\mathbf{a} = \mathbf{X}_0$ can be described by the relation
$$\mathbf{x}(t) = \mathbf{X}(\mathbf{a}, t).$$
The path line is obtained by solving the equation
$$\frac{d\mathbf{x}}{dt} = \mathbf{v}(\mathbf{x}, t), \qquad \mathbf{x}(0) = \mathbf{a}. \qquad (2.2)$$


Figure 2.3: Path line of a particle.

2.1.5 Streak line

A streak line is another concept used in flow visualization. It is defined as the current location of all fluid particles that have passed through a fixed spatial point at some previous time. It is determined by injecting dye or smoke at a fixed point during an interval of time and is often used in wind tunnel experiments. Streamline, path line and streak line are all identical in a steady flow.

In this thesis we will study vector fields at different instants of time, limiting our study to field lines.

2.2 Field line integration

2.2.1 The ODE system

A field line may be viewed as a solution of the following first order ODE (ordinary differential equation) system. Choosing a starting point $\mathbf{a} = (a_1, a_2, a_3)$ for the field line, we can write the equation 2.1 as
$$\begin{cases} x'(s) = F_1(x(s), y(s), z(s), t_0), \\ y'(s) = F_2(x(s), y(s), z(s), t_0), \\ z'(s) = F_3(x(s), y(s), z(s), t_0), \end{cases} \qquad (2.3)$$
with the initial conditions
$$x(0) = a_1, \quad y(0) = a_2, \quad z(0) = a_3. \qquad (2.4)$$

2.2.2 The grid

When dealing with numerical data, the vector field is not available in analytical form. It is given numerically at discrete locations. In our case we will assume that the data are given on a uniform grid.


Figure 2.4: A uniform grid.

By a uniform grid, we mean a collection of points and cells arranged on a regular, rectangular lattice [1] as shown in figure 2.4. The rows, columns and planes of the lattice are parallel to the global x-y-z coordinate system. Uniform grids consist of line elements (1D), pixels (2D) or voxels (3D) (see the figures 2.4 and 2.5 for illustrations of a voxel). Each pixel or voxel in a uniform grid is identical in shape. The number of points in the dataset is $N_x \times N_y \times N_z$, where $(N_x, N_y, N_z)$ specifies the number of points in the $x$, $y$ and $z$ directions. The number of cells is $(N_x - 1) \times (N_y - 1) \times (N_z - 1)$. If the domain is $\Omega = (a_x, b_x) \times (a_y, b_y) \times (a_z, b_z)$, the spacing between the grid points in each direction is given by $\Delta x = (b_x - a_x)/(N_x - 1)$, $\Delta y = (b_y - a_y)/(N_y - 1)$ and $\Delta z = (b_z - a_z)/(N_z - 1)$.

2.2.3 Interpolation

When working with data in discrete form, vector values between the grid or mesh points have to be computed by interpolation. We will in this thesis assume the data to be sufficiently smooth, so that the use of trilinear interpolation is accurate enough. Trilinear interpolation uses data values from the 8 vertices as shown in figure 2.5 to estimate the data value at the point $(x, y, z)$.

Figure 2.5: Trilinear interpolation.

The final interpolation step takes the form
$$f(x, y, z) = e_0 + (e_1 - e_0) \cdot \frac{z - z_0}{\Delta z},$$
where $x_0$, $y_0$ and $z_0$ are the coordinates of the cell point $V_0$ and $\Delta x = x_1 - x_0$, $\Delta y = y_1 - y_0$ and $\Delta z = z_1 - z_0$ are the lengths of the voxel in each direction.

2.2.4 Numerical solution

The ODE system (2.3)-(2.4) can be solved by numerical methods. The simplest numerical scheme is Euler’s method, which is derived from using the first two terms in the Taylor series.

A point located a distance $h$ ahead of a point $\mathbf{x}$ on the same field line can then be found by computing
$$\mathbf{x}(s + h) = \mathbf{x}(s) + h\,\mathbf{F}(\mathbf{x}(s), t_0). \qquad (2.5)$$
More accurate methods, like the higher-order Runge-Kutta methods [8], can be derived by including more terms in the Taylor series. We will use a fourth-order Runge-Kutta method to compute the field lines.

2.3 Visualization techniques for vector fields

As mentioned before, vector fields are useful for describing a number of physical phenomena and there are many ways of representing them.

2.3.1 Hedgehogs and glyphs

A natural vector visualization technique is to draw an oriented, scaled line for each vector. The line is drawn, starting at a grid point and is oriented in the direction of the vector components associated with that point. The color and length of each line can be set by the vector magnitude.

This technique is often referred to as a hedgehog or oriented lines. To get a better impression of the direction of the vector field, arrowheads can be added to the lines. Any 2D or 3D geometric representation indicating vector magnitude and direction is called a glyph (see figure 2.6).

These techniques are best suited for small data sets. If the placement of the glyphs is too dense and the variations in magnitude are too big, the images tend to be "cluttered" and visually confusing.

Figure 2.6: Glyphs.

The results can be improved if some form of thresholding is applied. One example which can remove some of the clutter is to neglect the drawing of glyphs where the length of the vector is below a certain value, $\|\mathbf{v}\| < \tau$. The threshold $\tau$ is typically a normalized quantity in the range $[0, 1]$. If $\tau = 0$, every vector is displayed. If $\tau = 1$, only the vectors with the largest magnitude are present in the resulting image. Another method is to scale the vectors so that the overlapping of the glyphs is reduced. In figure 2.7, we have used threshold and scale to emphasize regions where the information of the vector field is important. We see from the bottom image that suppressing a larger number of the least significant vectors may show relevant physical information more clearly.

2.3.2 Curve representation

A better way of representing vector fields is to draw curves that reveal the orientation and structure of the field. The curves can be any of the curves defined in section 2.1, depending on what we wish to see. The lines can be colored according to vector magnitude, but other scalar quantities such as temperature or pressure may also be used to color the lines. The computation of path lines and streak lines strongly depends on the capabilities of the underlying hardware. Both these techniques are time dependent, and vector data for multiple time steps have to be stored in the computer during the calculations. The memory requirements can quickly reach many gigabytes, and not all computers are big enough to handle that.

A possible problem concerning the rendering of field lines is the spatial perception of the objects in the scene. On common graphics workstations, field lines and other curves are displayed using flat shaded line segments, impairing the spatial impression of the image [9]. Phong type shading models [1] are traditionally applied to surface elements, but can be generalized to line primitives in $\mathbb{R}^3$ [9]. Such generalizations have been used to render fur or human hair. However, on current graphics workstations, there is no direct hardware support for the display of illuminated line primitives [9]. Therefore major parts of the illumination calculations have to be performed in software. In 1997, Stalling, Zöckler and Hege [9] presented a method to achieve fast and accurate line illumination by exploiting the texture mapping capabilities of modern graphics hardware. This shading technique allows the visualization of large numbers of field lines in a vector field [9].

Other ways to enhance the three-dimensional impression of the vector field are to represent the field lines by polygonal objects, for example tubes. One of these techniques is called streamribbons. A streamribbon can be constructed by generating two adjacent field lines and then bridging the lines with a polygonal mesh. This technique works well as long as the field lines remain relatively close to one another [1]. If the field lines diverge, the resulting ribbons will not accurately depict the vector field, because we expect the surface of a ribbon to be everywhere


Figure 2.7: Visualization of a vector field using glyphs. In the top image we have set the threshold $\tau = 0.01$ and the scale $s$ proportional to $\Delta h$, where $\Delta h$ is the largest of the grid spacings $\Delta x$, $\Delta y$ and $\Delta z$. In the bottom image $\tau = 0.1$ with the same scale. The value $s$ determines the length of the largest glyphs.


Figure 2.8: Visualization of a vector field using field lines. The red lines are at the "downstream" side of the seed point whereas the green ones are at the "upstream" side.

tangent to the vector field (i.e., the definition of a field line).

A streamsurface is a collection of an infinite number of field lines passing through a curve. The curve defines the starting points for the field lines, and if the curve is closed, as in a circle, the surface is closed and we get a streamtube. Streamsurfaces can be computed by generating a set of field lines from selected points on the curve. A polygonal mesh is then constructed by connecting adjacent field lines. As with streamribbons, the separation of the field lines can introduce large errors into the surface.

A problem with all these techniques, with the exception of the one proposed by Stalling, Zöckler and Hege [9]¹, is the limitation on the number of field lines that can be displayed in the scene without cluttering the image. This makes the visualization dependent on the choice of seed points. As mentioned before, it is not obvious how to distribute the field lines in space without missing important details of the field. In figure 2.8, the image is a little cluttered because of the large number of field lines rendered in the vector field. As in figure 2.7, we have focused on a region of interest by thresholding the distribution of seed points.

2.3.3 Texture based techniques

The use of texture based techniques is an alternative method for visualizing vector fields. Examples of these techniques are spot noise [10], [11], illuminated field lines [9] and Line Integral Convolution [5], [12], [13]. These techniques avoid some of the problems with vector visualization discussed in section 1.2 and the subsections 2.3.1 and 2.3.2. Figure 2.9 shows the result after applying LIC on the same vector field as visualized with other techniques in figures 2.7 and 2.8. The vector field is obtained from [16].

¹ The Fast display of illuminated field lines method allows the generation of images with thousands of field lines at interactive rates [9]. This means that the positioning of an individual field line becomes less important.


Figure 2.9: Visualization of a vector field using Line Integral Convolution.

This thesis focuses on Line Integral Convolution, which will be presented more thoroughly in chapter 4. In two dimensions, LIC takes a bitmap (a texture) and a two-dimensional vector field as inputs and computes a new bitmap. The computed LIC texture will look like an image that is covered with spatially oriented structures along the vector field, see figure 4.6 on page 30. The advantage of this approach, as opposed to other field line techniques, is that it depicts all parts of the vector field. Line Integral Convolution evaluates the vector field at every pixel, hence it is independent of the choice of seed points. LIC is also independent of resolution². This allows the use of textures that are larger than the grid size of the vector field, without having to resample the vector field. Thus, if the data are sufficiently smooth and the interpolation is sufficiently accurate, we can, by increasing the resolution of the texture, produce more detailed images.

In three dimensions, Line Integral Convolution [5], [12] leads to dense images where the inner structure of the field can be difficult to depict (see figure 5.1 on page 34). Methods that reveal some of the inner structure will be discussed in chapters 5 and 6. One approach is the application of sparse input textures [13], [14]; another is the use of clip planes.

Line Integral Convolution is a quite compute intensive technique. In 1995, Stalling and Hege [12] proposed a much faster and more accurate LIC algorithm, which made LIC a popular technique for displaying vector fields on two dimensional surfaces [15]. When LIC is used to depict a 3D flow through a volume, however, even the algorithm presented by Stalling and Hege [12] may take considerable time to create the 3D LIC texture. This is the main reason for proposing a fast LIC algorithm in 3D, which we have called Seed LIC. This technique exploits the sparsity of the input texture. The discussion of Seed LIC will take place in section 5.2.

2.4 The data sets

Data used in the visualizations in this thesis comes from numerical simulations computed at the Norwegian Defence Research Establishment (FFI) and the Colorado Research Associates

² Stalling and Hege [12] made LIC independent of resolution. In the algorithm proposed by Cabral and Leedom [5], the vector field, the input texture and the output texture had to be of the same resolution.


Figure 2.10: Visualization of a synthetic vector field using a few field lines. The red lines are at the "downstream" side of the seed point whereas the green ones are at the "upstream" side.

(CoRA/NWRA). In addition we have employed a synthetic data set given by the formula
$$\mathbf{F}(x, y, z) = \begin{cases} \big(-y,\; x,\; \sin(10\,r\,z)\big), & r \neq 0, \\ (0,\; 0,\; 0.1), & r = 0, \end{cases}$$
where $r = \sqrt{x^2 + y^2}$. The data set is a vector field that rotates around a line parallel to the z axis, see figure 2.10.

These data sets have been used in the development of algorithms and to study and compare different visualizations techniques.

The data set made at FFI was obtained from a simulation of shock waves from an explosion [16]. The problem comes from computational fluid dynamics (CFD) and was modeled by the three-dimensional Euler equations. One practical application of such problems is the study of how vorticity³ produced by shock waves mixes two different gases. The solution contains vortices generated by the shock waves, which can be seen by visualizing for instance the vorticity field.

The computation was performed on aðŒ“ŒçÙðŒ“ŒçÙ鋌ðŒ grid.

The data set from Colorado Research Associates was obtained from a simulation of stratified shear turbulence [17]. This is the highest-resolution 3D direct numerical simulation of Kelvin-Helmholtz (KH) [18] instability reported to date. The solution offers the most accurate characterization of stratified turbulence presently available. KH instability generates vortices or KH billows, and the resulting turbulence is often found to be an efficient mixing and dissipating process. The simulation reveals the breakdown of a single KH billow and was solved with a pseudo-spectral Galerkin method with field variables represented by Fourier series. The spatial resolution (number of spectral modes) was varied during the time evolution, so that small-scale features were always properly represented. The data set used involves more than

ŒðŒçٓŒ“ŒçÙ鋓ŒðŒ modes.

KH billows occur fairly frequently in the atmosphere, with wavelengths up to a few kilometers. As they induce vertical air motion, they sometimes generate billow clouds, see figure 2.11.

KH billows occur at the interface between two fluids of different density and velocity.

³ The vorticity is the curl of the velocity vector and can be written as $\boldsymbol{\omega} = \nabla \times \mathbf{v}$.


Figure 2.11: Billow clouds.

An important field of research is the study of turbulent flows. Next, we describe some basic characteristics of turbulence and how to visualize it.

2.4.1 Turbulence

Most flows occurring in nature and in engineering applications are turbulent. Blood moving through the heart in our body is turbulent. The flow of water in rivers and canals is turbulent.

Most combustion processes, like the mixing of fuel in an engine, involve turbulence and often depend on it. Practically all the fluid flows that interest scientists and engineers are turbulent ones.

An understanding of turbulence can for example allow engineers to reduce the aerodynamic drag on a race car or a commercial airplane, increase the maneuverability of a jet fighter or improve the efficiency of an engine.

Turbulence is not always an unfortunate phenomenon that has to be eliminated at every opportunity. In certain fields many engineers work hard trying to increase it. One example is the introduction of dimples on a golf ball. The dimples increase the turbulence close to the surface, bringing the airstream closer to the ball, see figure 2.12. This reduces the drag of the golf ball and allows a skilled golfer to drive the ball 250 meters instead of 100 meters. Another example is a combustion engine, where the turbulence enhances the mixing of fuel and produces cleaner and more efficient combustion.

But what exactly is turbulence? Everyone who has seen smoke streaming upward into still air from a burning cigarette has some idea about the nature of turbulent flow. Immediately above the cigarette, the flow is smooth. Such a flow is known as laminar. A little higher up, it becomes rippled and diffusive, or in other words turbulent. The same thing can be seen with water flowing from a kitchen tap. If we open the tap just a little, the flow is smooth and transparent. Open the tap a bit further, and the flow becomes rougher and fuzzier. However, it is very difficult to give a precise definition of turbulence. All we can do is try to describe some characteristics of turbulent flows. Tennekes and Lumley [19], for instance, give a list of such characteristics.

Turbulence is, for example, irregular, or "random". It is characterized by a high level of fluctuating vorticity. It is rotational and three-dimensional and always occurs at high Reynolds numbers. Turbulence is composed of eddies or vortices moving randomly around and about the overall direction of motion [2]. These vortices are continually forming and breaking down.


Images: Slim Films

Figure 2.12: The drag on a golf ball is dominated by pressure forces. The drag arises when the pressure in front of the ball is significantly higher than the pressure behind the ball. The dimples of a golf ball increase the turbulence close to the surface, bringing the high speed airstream closer and increasing the pressure behind the ball. The effect is plotted in the chart, which shows that the drag is much lower for the dimpled ball than for a smooth sphere, where the flow remains laminar over a great portion of the surface. The figures are taken from [2].

Large vortices break down into smaller ones, which break down into smaller vortices, and so on. The largest eddies are fed by external forcing, whereas the smallest are dissipated into heat by viscous action.

Supercomputers have made it possible to simulate turbulence. Due to the huge amount of data derived from direct numerical simulations, it is a challenge to reveal the complex structures when visualizing such flows.

Visualization is a tool used to verify and interpret numerical data. Knowing little about the expected behavior of a given problem, we can get an impression of whether the simulations are reasonable or not by studying plots and animations. In some applications it can be sufficient to study for example the velocity field, but when it comes to turbulent flows, the instantaneous velocity field tends to be very complex and difficult to study. The vorticity field depicts the structure in a turbulent flow better than, for instance, the velocity field, due to the fact that vortices are coherent on elongated structures [20]. Since vorticity dynamics plays an essential role in the description of turbulent flow, this particular choice is rather intuitive. Both vorticity ($\boldsymbol{\omega} = \nabla \times \mathbf{v}$) and enstrophy ($\Omega = \|\boldsymbol{\omega}\|^2$), where $\mathbf{v}$ is the velocity field, appear to render the turbulent field very nicely and reveal the vortical structure of the flow.

Once a good visual comprehension of the mean structure of the flow is achieved, we can begin searching for dynamical processes relating the structures in the solution. An active field of research in the turbulence research community is the identification of coherent structures, or in a sense "uniform" structures. Coherent vortices can be described as regions of the flow satisfying two conditions [21]:

1. The vorticity concentration $\omega$ should be high enough so that a local roll up of the surrounding fluid is possible.

2. They should approximately keep their shape during a time τ_c long compared to the local turnover time ω⁻¹.

Figure 2.13: Vorticity field represented as the vorticity magnitude of a Kelvin–Helmholtz billow in a stratified fluid.

Examples of criteria that have been used to investigate coherent vortices and to visualize the structures of a turbulent flow are pressure, vorticity, enstrophy, the Q-method [22] and the λ₂-approach [21]. Figure 2.13 shows an example where the vorticity magnitude is used to render data from the simulation of stratified shear turbulence [17].

2.4.2 Data file format

Computational scientists rarely use only one computer. Typically they use one or more computers to do the simulations and another computer to visualize and analyze the data. Also, they may share data files with other scientists who use different machines and software. To help scientists reduce the time spent trying to convert data sets to familiar formats, some standard formats were created. Some examples are HDF, CDF, netCDF, SAIF, SDTS and HDS.

A standard format used at FFI is the Hierarchical Data Format (HDF), which is a data file format designed by the National Center for Supercomputing Applications (NCSA) to assist users in the storage, manipulation and access of scientific data across diverse operating systems and machines. HDF comes with a library of callable routines and a set of utility programs and tools for creating and using HDF files. It was designed to address many requirements for storing scientific data, including:


newer HDF5. HDF5 was designed to address some of the limitations of the older HDF product, which is restricted to a file size of 2 gigabytes (32-bit addressing) and does not support parallel I/O effectively.

The Line Integral Convolution application developed in conjunction with this thesis uses HDF to read and write data. The application is implemented in the C++ programming language and uses the GUI5 library Qt [24].

4 HDF files are self-describing. The term "self-describing" means that, for each HDF data structure in the file, there is comprehensive information about the data and its location in the file. This information is often referred to as metadata.

5 Graphical user interface.


We begin the chapter by describing a few concepts important in volume visualization. These are transparency, color mapping and texture mapping. We then continue with a discussion of various rendering techniques and finish by presenting the two volume rendering applications used in this thesis.

3.1 Transparency, opacity and alpha values

An important concept in visualization of volumetric data is transparency or opacity. Although many visualization techniques, such as glyphs and streamtubes, involve rendering of opaque objects, there are applications that can benefit from the ability to render objects that emit light.

The internal data from an MRI (Magnetic Resonance Imaging) scan can, for instance, be shown by making the skin semitransparent, see figure 3.1.

Figure 3.1: The skull of a head is emphasized by assigning low opacity to the soft tissues.


3.2 Color mapping

Color mapping is a common scalar visualization technique that maps scalar data into color values to be rendered. In color mapping, the scalar values are divided into n equal intervals and serve as indices into a lookup table, see figure 3.2. The lookup table holds an array of colors that

i = n (s − min) / (max − min), with i = 0 for s < min and i = n − 1 for s > max.

Figure 3.2: Mapping scalars to colors via a lookup table.

can be represented for example by the RGBA (red, green, blue, alpha) [1] or the HSVA (hue, saturation, value, alpha) [1] color system. The RGBA system describes colors based on their red, green, blue and alpha intensities and is used in raster graphics systems [1]. The HSVA system, which many scientists find gives good control over colors in scientific visualizations, represents colors based on hue, saturation, value and alpha. In this system, the hue component refers to the wavelength which enables us to distinguish one color from another. The value, also known as the intensity component, represents how much light is in the color, and saturation indicates how much of the hue is mixed into the color.
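The index computation sketched in figure 3.2 reduces to a scaling followed by clamping. A minimal Python illustration (the 4-entry grayscale RGBA table is hypothetical):

```python
# Scalar-to-color mapping through a lookup table (cf. figure 3.2).
# The 4-entry grayscale RGBA table below is a made-up example.

def scalar_to_color(s, lut, smin, smax):
    n = len(lut)
    if s < smin:
        i = 0                 # clamp below the range
    elif s > smax:
        i = n - 1             # clamp above the range
    else:
        i = int(n * (s - smin) / (smax - smin))
        i = min(i, n - 1)     # s == smax would otherwise index n
    return lut[i]

lut = [(0, 0, 0, 255), (85, 85, 85, 255),
       (170, 170, 170, 255), (255, 255, 255, 255)]
print(scalar_to_color(-1.0, lut, 0.0, 1.0))  # clamped to the first entry
print(scalar_to_color(0.6, lut, 0.0, 1.0))   # third entry (index 2)
```

With n = 256 entries and 8-bit scalar data, the mapping becomes a direct table lookup.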

Use of colors is important in visualization and should be used to emphasize various features of the data set. However, making a good color table that communicates relevant information is a rather challenging task. "Wrong" use of colors may exaggerate unimportant details. Some advice on making a color table is given in [25]. Figure 3.3 illustrates the use of color tables in volume visualization.

3.3 Texture mapping

Geometric objects are, in computer graphics, represented by polygonal primitives. In order to render a complex scene, millions of vertices have to be used to capture the details. A technique that adds detail to a scene without explicitly modeling the detail with polygons is texture mapping. Texture mapping maps, or pastes, an image (a texture) onto the surface of an object in the scene. The image is called a texture map and its individual elements are called texels.


Figure 3.3: Visualization of a sphere using the HSVA color system. The scalar data are represented as a voxel set with 8-bit precision (in the range [0, 255]), and serve as indices into a lookup table. A clip plane is used to reveal the various "layers" of the sphere represented in different colors.

Texture maps can be both two- and three-dimensional. A texture may contain from one to four components. A texture with one component contains only the intensity value, and is often referred to as an intensity map. A two-component texture contains the intensity value and the alpha value. A three-component texture contains RGB values, and a texture with four components contains RGBA values.

To determine how to map the texture onto the polygons, each vertex has an associated texture coordinate. The texture coordinate maps the vertex into the texture map. The texture map in 2D and 3D can be defined at the coordinates (u, v) and (u, v, w), where u, v and w are in the range [0, 1].
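What the texture hardware does with these coordinates can be mimicked in software: scale (u, v) to texel space and interpolate between the four nearest texels. A minimal Python sketch (illustrative, not tied to any particular graphics API):

```python
# Sampling a 2D texture at coordinates (u, v) in [0, 1] with bilinear
# interpolation -- a software sketch of what texture hardware performs.

def sample2d(tex, u, v):
    h = len(tex); w = len(tex[0])
    # map [0, 1] to continuous texel space and clamp at the border
    x = min(max(u * (w - 1), 0.0), w - 1)
    y = min(max(v * (h - 1), 0.0), h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [0.0, 1.0]]
print(sample2d(tex, 0.5, 0.5))  # 0.5, halfway between the two columns
```

The 3D (trilinear) case adds one more interpolation step along w in exactly the same fashion.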

Texture mapping is a hardware dependent feature and is designed to display complex scenes at real time rates. While most graphics systems have support for 2D texture hardware, some systems like the InfiniteReality [26] have support for 3D texture mapping graphics hardware.

3D textures can be used to store volumetric scalar data obtained from numerical simulations.

The scalar values in scientific visualization are often normalized and represented as a voxel set with 8-bit precision (in the range [0, 255]). These values can be used as indices into a lookup table. In that case, texel values in the volume texture are mapped to color values to be rendered. For some graphics systems (like InfiniteReality), the color tables are implemented in texture hardware. This allows an instant update of the color and opacity in the scene after altering the lookup table. If the color tables are not supported in hardware, the textures have to be regenerated every time the color table changes.

InfiniteReality graphics systems support three basic texel sizes: 16-bit, 32-bit and 48-bit. Texture memory is presently a very expensive resource. To save memory, 3D textures are often represented with a depth of 16 bits. Using 16-bit textures, a texture memory of 64 MByte


3.4 Volume rendering techniques

There are many different rendering techniques, which can be divided into two groups: geometric rendering and direct volume rendering.

3.4.1 Geometric rendering

In geometric rendering, which is the most common group, geometric objects made up of points, lines and polygons are constructed from the 3D data and then rendered. Glyphs, field lines and streamtubes are all examples of visualizations of vector data using geometric rendering techniques.

Since Line Integral Convolution maps a vector field onto a scalar field, we will focus on the rendering of scalar data. A typical way of visualizing scalar data is to display the volume by drawing isosurfaces. An isosurface, or 3D contour, consists of many polygon primitives and is created by selecting a scalar value (an isovalue), resulting in a surface showing the regions at the chosen contour level. This works best for volumes with strong and obvious structures, for example in terrain visualization and when showing the bone from an MRI scan of a part of the human body. Isosurfaces are not suited for volumes with more complex and diffuse topology, like various fluids. To extract useful information, only a few surfaces can be rendered in the same scene, and it is hard to get an impression of a complex flow using a few contours only. A cloud is an example where it is difficult to give a realistic rendering using 3D contours.

Different algorithms have been proposed for efficiently reconstructing polygonal representations of isosurfaces from scalar volume data [28], [29], [30], but unfortunately none of these approaches can be used efficiently in an interactive application [31]. This is due to the effort that has to be spent to fit the surface and also to the enormous number of triangles produced. In the paper by Westermann and Ertl [31], an isosurface was reconstructed with a marching cubes algorithm [28], [29] from an abdomen data set. It took about half a minute to generate 1.4 million triangles. In addition comes the time involved in rendering the triangle list, which on a high-end graphics computer takes several seconds. Interactive manipulation of the isovalue in large data sets with geometric rendering is therefore difficult. Westermann and Ertl [31] proposed a "direct" approach for rendering isosurfaces. This approach avoids the polygonal representation by using 3D texture mapping, and used approximately one second to render the same data set.

3.4.2 Direct volume rendering

The other main group of rendering techniques is direct volume rendering, which is most common in connection with visualization of scalar data. In direct volume rendering, voxels are used as


The pixel value is computed by evaluating the voxels encountered along the ray using some specified function.

Figure 3.4: Ray tracing.

Although ray tracing gives high-quality renderings, it is seldom used in scientific visualization. The process is very compute-intensive, and since it is implemented in software (as it is difficult to implement on dedicated hardware) it does not yet provide the interactivity needed for visualizing large scientific data. However, interactive ray tracing is an active topic of research [35], [36], and researchers believe that ray tracing still has room for performance improvements and may become able to perform interactive rendering, even on standard PC hardware [36].

3.4.3 Direct volume rendering with 3D texture mapping

Hardware-assisted volume rendering using 3D textures can provide interactive visualizations of 3D scalar fields [37], [38], and was first presented by Cabral et al. [37]. The basic idea of the 3D texture mapping approach is to use the scalar field as a 3D texture. If the texture memory is large enough, the entire volume is downloaded into the texture memory once as a preprocess.

Figure 3.5: Volume rendering by 3D texture slicing.

To render the voxel set, a set of equally spaced planes (slices) parallel to the image plane is clipped against the volume (see figure 3.5). The hardware is then exploited to interpolate 3D texture coordinates at the polygon vertices and to reconstruct the texture samples by trilinear interpolation within the volume. If a color table is used, the interpolated data values are passed through a lookup table that maps the values into color and opacity values. This way, graphics hardware allows fast response when modifying color and opacity. Finally, the volume is displayed by blending the textured polygons back to front onto the viewing plane. This technique is called volume slicing. Due to trilinear interpolation and dedicated hardware, we are with this technique able to produce images of high quality at interactive rates.

The results of ray casting and volume slicing are, according to SGI (Silicon Graphics), identical, but there are some important differences between the two techniques in processing the volumes. First, volume slicing is faster than ray casting because computations are performed by the dedicated texture hardware, whereas ray casting computations are performed by the CPU.

Second, volume slicing reduces the volume to a series of texture-mapped semitransparent polygons. These polygons can be merged with any other polygonal database and handed to any graphics API (for example OpenGL) for drawing.
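The back-to-front blending of the slices is the standard "over" operator, C ← C_s·α_s + C·(1 − α_s), applied once per slice. A single-channel Python sketch (illustrative, not any particular API):

```python
# Back-to-front compositing of texture slices (the "over" operator):
# each slice, drawn back to front, blends as C = Cs*As + C*(1 - As).
# Single-channel sketch; slices are (color, alpha) pairs ordered back to front.

def composite(slices, background=0.0):
    c = background
    for cs, a in slices:          # iterate from back to front
        c = cs * a + c * (1.0 - a)
    return c

# A fully opaque front slice hides everything behind it:
print(composite([(1.0, 1.0), (0.25, 0.5), (0.8, 1.0)]))      # 0.8
# Fully transparent slices leave the background unchanged:
print(composite([(1.0, 0.0), (0.3, 0.0)], background=0.1))   # 0.1
```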

Although 3D texture mapping is a powerful method, it strongly depends on the capabilities of the underlying hardware. In the methods used in [37], [38] the entire volume has to be stored in the texture memory. Some graphics libraries allow the paging of textures [27], [39]; however, such methods for dealing with volumes whose size exceeds the physical texture memory severely hamper the interactivity of the rendering [40]. When the size of the volume data set exceeds the amount of available texture memory, the data can be split into subvolumes or bricks that are small enough to fit into memory. Each brick is then rendered separately, but since the bricks have to be reloaded for every frame, the rendering performance decreases considerably.

To reduce texture loading, Weiler [40] has proposed a level-of-detail representation of the textures. In this method each brick stores approximations of the original data at coarser resolutions. These smaller bricks may be used when rendering the volume, allowing regions of interest to be displayed at higher resolution than other parts of the data set.

In this thesis we will make use of the volume renderers Viz [41] and VoluViz (an application we developed using the OpenGL Volumizer 2 [27] API). Both use a similar direct volume rendering technique.

The rendering algorithm in Viz starts with a voxel set and a color table containing a color and alpha entry for each of the data values associated with the voxel set. Before rendering, the major axis is derived. The major axis is the coordinate axis whose direction is closest to the screen normal. After identifying the major axis, the voxel set is rendered in 3D using a set of slice planes, as in volume slicing (see figure 3.6). The difference is that in this method, the slices are derived from the intersection of the voxel set with a set of planes perpendicular to the major axis. Such an approach is used when restricted to 2D texture hardware. What separates Viz from 2D texture-mapped volume rendering is that Viz can perform trilinear interpolation and utilizes memory better1. Finally, the slices are rendered back to front.

Figure 3.6: An illustration of Viz's rendering process.
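Selecting the major axis amounts to picking the coordinate axis with the largest |n · e_i| for the screen normal n. A minimal Python sketch (illustrative; Viz's actual code is not shown in this thesis):

```python
# Choosing the major axis: the coordinate axis whose direction is closest
# to the screen normal, i.e. the axis with the largest |n . e_i|.

def major_axis(screen_normal):
    nx, ny, nz = screen_normal
    mags = [abs(nx), abs(ny), abs(nz)]
    return mags.index(max(mags))   # 0 = x, 1 = y, 2 = z

print(major_axis((0.1, 0.2, -0.97)))  # 2: slice perpendicular to z
print(major_axis((0.9, 0.3, 0.3)))    # 0: slice perpendicular to x
```

The renderer then walks the slices of the voxel set along that axis, back to front.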

In OpenGL Volumizer 2, the volume is first tessellated into a tetrahedral mesh. After sorting the tetrahedra in back-to-front visibility order, they are rendered separately using the volume slicing technique (see figure 3.7).

Figure 3.7: Back to front composited slices for one, three, and five tetrahedra.

3.5 VIZ

Viz is a volume rendering application previously developed at FFI. It is a highly interactive renderer when run on a system with texture hardware. Viz has many features, including trilinear interpolation, an interactive color table (both RGBA and HSVA), picking of subsets, clip planes, blending of geometries and voxel data, and visualization of two fields in the same scene. Viz is restricted to data sets on uniform grids.

1 To avoid reloading the textures in 2D texture mapping, all slices from the three major volume orientations have to be stored in memory.


simple interface to the high-end graphics features available in InfiniteReality systems (such as 3D texture mapping and texture lookup tables). Since OpenGL Volumizer 2 utilizes a tetrahedral mesh, it can handle both regular grids and unstructured meshes. OpenGL Volumizer 2 supports all SGI graphics systems with 3D texture mapping and color tables.


4.1 Introduction to Line Integral Convolution

Line Integral Convolution (LIC) is a powerful technique used to represent vector fields with high accuracy. It is a texture based technique that can be used to display both two- and three-dimensional fields. LIC is essentially a filtering technique that blurs a texture locally along a given vector field, causing voxel intensities to be highly correlated along the field lines but independent in directions perpendicular to them. It takes a pixel/voxel set and a vector field as inputs and produces a new pixel/voxel set as output, see figure 4.1.


Figure 4.1: A vector field and a voxel set are inputs to the Line Integral Convolution resulting in a new voxel set.

Since its introduction in 1993 by Cabral and Leedom [5], Line Integral Convolution has been an active field of research within the computer graphics and visualization community. Several researchers have developed the LIC algorithm further, and the method has found many application


Figure 4.2: Examples of LIC images. The image on the left depicts the computed velocity field close to a racing car, computed at the Italian Aerospace Research Center (CIRA). The image on the right is a picture of flowers convolved by a given 2D vector field, taken from [5].

areas, ranging from computer art to scientific visualization. Two examples of LIC images are shown in figure 4.2.

4.2 Convolution

Convolution is a mathematical operation that is applied in several areas, such as image processing, optics and signal processing. The convolution of two real functions f = f(x) and g = g(x) is defined as

h(x) = f(x) * g(x) = g(x) * f(x) = \int_{-\infty}^{\infty} f(t)\, g(x - t)\, dt.    (4.1)

If we convolve f(x) with the Dirac delta function

\delta(x) = 0 \ (x \neq 0), \quad \infty \ (x = 0),    (4.2)

we obtain

f(x) * \delta(x) = \int_{-\infty}^{\infty} f(t)\, \delta(x - t)\, dt = f(x).    (4.3)

In figure 4.3, we see how a convolution with a "box" function leads to a smearing of the function f.
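The discrete counterpart of definition (4.1) is a finite sum, and the delta property (4.3) survives discretization. A minimal Python illustration (not from the thesis):

```python
# Discrete convolution: (f * g)[i] = sum_j f[j] g[i - j].
# Convolving with a discrete delta kernel returns f unchanged,
# while a box kernel smears f out.

def convolve(f, g):
    n = len(f) + len(g) - 1
    h = [0.0] * n
    for i in range(n):
        for j in range(len(f)):
            if 0 <= i - j < len(g):
                h[i] += f[j] * g[i - j]
    return h

f = [0.0, 1.0, 4.0, 1.0, 0.0]
print(convolve(f, [1.0]))               # delta kernel: f unchanged
print(convolve(f, [1/3, 1/3, 1/3]))     # box kernel: smeared version of f
```

Note that a kernel whose weights sum to one preserves the total intensity of f.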

Convolution is commonly used in image processing. The convolution is then typically represented by a two-dimensional convolution matrix K, where the matrix elements describe the blurring effect applied to the image. The intensity of a pixel h(x_{ij}) in the new image is found by adding the intensity values of "neighboring" pixels in the original image, each multiplied by the matrix element matching the position of the pixel. If for example a picture is convolved by the 3 \times 3 matrix

K = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},    (4.4)

the only pixel to contribute in finding h(x_{ij}) is x_{ij} itself. The result after such a convolution is the original image. The intensity of a pixel after convolution can be found by

h_{ij} = N \sum_{k=1}^{m} \sum_{l=1}^{m} K_{kl}\, h_{i+k-(m+1)/2,\ j+l-(m+1)/2},    (4.5)

where h_{ij} = h(x_{ij}), m is the size of the convolution matrix and N is a normalization constant. Figure 4.4 demonstrates the effect of convolving an image by a 3 \times 3 matrix. Notice the blurring effect in the right image.

Figure 4.4: Blurring of a picture.
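Equation (4.5) is straightforward to transcribe. The illustrative Python sketch below (zero padding at the borders is an assumption) shows that the matrix of eq. (4.4) returns the image unchanged, while a normalized averaging kernel blurs it:

```python
# 2D image convolution as in eq. (4.5): every output pixel is a weighted
# sum of its neighbours, weights given by an m x m matrix K.
# Pixels outside the image are treated as zero (a padding assumption).

def convolve2d(img, K):
    rows, cols = len(img), len(img[0])
    m = len(K)
    off = m // 2
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for k in range(m):
                for l in range(m):
                    ii, jj = i + k - off, j + l - off
                    if 0 <= ii < rows and 0 <= jj < cols:
                        s += K[k][l] * img[ii][jj]
            out[i][j] = s
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # the matrix of eq. (4.4)
blur = [[1/9] * 3 for _ in range(3)]           # normalized averaging kernel

img = [[0.0, 0.0, 0.0],
       [0.0, 9.0, 0.0],
       [0.0, 0.0, 0.0]]
print(convolve2d(img, identity))   # the original image back
print(convolve2d(img, blur))       # the bright pixel smeared over 3x3
```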

4.3 Convolution along a vector

Line Integral Convolution is a modification of a technique called DDA convolution [5]. In this method, each vector in a field is used to compute a DDA line which is oriented along the vector and goes in the positive and negative vector direction some distance L. A convolution is then applied to the texture along the DDA line. The input texture pixels under the convolution kernel are weighted and summed to produce the output texture pixel (see figure 4.5).


Figure 4.5: Convolution along a vector. The pixel in the output texture is a weighted average of all the input texture pixels covered by the DDA line.

The DDA approach depicts the vector field inaccurately. It assumes that the local vector field can be approximated by a straight line. As a result, DDA convolution gives an uneven rendering, treating linear portions of the field more accurately than areas with high curvature, such as areas with small eddies or vortices. This becomes a problem in visualization of vector fields, since details in the small-scale structure are lost. Line Integral Convolution solves part of this problem, as the convolution takes place along curved segments.

4.4 LIC

For a given vector field v : R³ → R³, the idea of Line Integral Convolution is to blur an input texture along the field lines of v. The LIC algorithm carries out the blurring by applying a one-dimensional convolution throughout the input texture. Each voxel in the output texture is determined by the convolution kernel and the texture voxels along the local field line indicated by the vector field. As a result, the intensity values of the output scalar field are strongly correlated along the field lines, whereas perpendicular to them almost no correlations appear.

LIC images can therefore provide a clear visual impression of the directional structure of v. This is illustrated in figure 4.6.


Figure 4.6: A 2D example where line integral convolution is applied to a white noise input texture. We see how the input texture is blurred along the field lines of the vector field. The images are taken from [42].

Given a field line \sigma, Line Integral Convolution can mathematically be described by

I(x_0) = \int_{s_0 - L}^{s_0 + L} k(s - s_0)\, T(\sigma(s))\, ds,    (4.6)

where I(x_0) is the intensity for a voxel located at x_0 = \sigma(s_0). In this equation k denotes the filter kernel of length 2L and T denotes the input texture. The curve \sigma(s) is parameterized by the arc-length s. The filter length, or the convolution length, determines how much the texture is smeared in the direction of the vector field. With L equal to zero, the input texture is passed through unchanged. As the value of L increases, the output texture is blurred to a greater extent.

Stalling and Hege [12] found good results by choosing the convolution length 2L to be 1/10th of the image width.

In the algorithm (for 2D) proposed by Cabral and Leedom [5], referred to as CL-LIC hereafter, computation of field lines was done by a variable-step Euler's method. The local behavior of the vector field is approximated by computing a local field line that starts at the center of a pixel (x, y) and moves out in the "downstream" and "upstream" directions,

P_0 = (x + 0.5,\ y + 0.5),
P_i = P_{i-1} + \frac{v(P_{i-1})}{\| v(P_{i-1}) \|}\, \Delta s_{i-1},
P_0' = P_0,
P_i' = P_{i-1}' - \frac{v(P_{i-1}')}{\| v(P_{i-1}') \|}\, \Delta s_{i-1}',    (4.7)
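The pieces above can be combined into a minimal 2D LIC sketch: from each pixel center, trace the field line a few Euler steps downstream and upstream in the spirit of eq. (4.7), and average the noise-texture values under a box kernel as in eq. (4.6). This illustrative Python version uses a fixed step and nearest-neighbour lookup, simplifications relative to CL-LIC:

```python
# A minimal 2D LIC sketch: trace the field line from each pixel centre
# with fixed-step Euler integration (cf. eq. 4.7) and average the
# noise-texture values along it with a box kernel (cf. eq. 4.6).
import random

def lic(texture, field, steps=10, ds=0.5):
    rows, cols = len(texture), len(texture[0])
    out = [[0.0] * cols for _ in range(rows)]
    for py in range(rows):
        for px in range(cols):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # downstream, then upstream
                x, y = px + 0.5, py + 0.5     # (the centre texel is sampled
                for _ in range(steps):        #  in both passes; fine here)
                    i, j = int(y), int(x)
                    if not (0 <= i < rows and 0 <= j < cols):
                        break
                    total += texture[i][j]
                    count += 1
                    vx, vy = field(x, y)
                    norm = (vx * vx + vy * vy) ** 0.5
                    if norm == 0.0:
                        break
                    x += sign * ds * vx / norm   # Euler step along the field
                    y += sign * ds * vy / norm
            out[py][px] = total / max(count, 1)
    return out

random.seed(0)
noise = [[random.random() for _ in range(16)] for _ in range(16)]
result = lic(noise, lambda x, y: (1.0, 0.0))   # uniform horizontal field
```

With the uniform horizontal field used here, intensities become strongly correlated along rows, the hallmark of a LIC image.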
