
6 Visualization of Laser Confocal Microscopy Datasets

Structures on the microscopic scale (nerve cells, tissue and muscles, blood vessels, etc.) show beautiful, complex, and mostly still unexplored patterns, usually of higher complexity than those of organs. In order to understand the spatial relationship and internal structure of such microscopic probes, tomographic series of slices are required, in analogy to the tomographies used for organs and other macroscopic structures.

Figure 28: Volume rendering of the intensities obtained by the back-projection technique (a, b: left ventricle, left atrium, catheter, aorta)

Laser confocal microscopy is a relatively new method allowing for a true tomographic inspection of microscopic probes. The method operates according to a simple, basic principle [20].

Figure 29: Principle of Laser Confocal Microscopy

Visible or ultraviolet laser emission is focused on the first confocal pinhole and then onto the specimen as a diffraction-limited light spot, see fig. 29. The primary incident light is then reflected from particular voxel elements, or emitted from fluorescent molecules excited within them. Emissions from the object return along the primary laser light pathway and depart from it by lateral reflection from (or passage through, depending on the instrument) a dichroic mirror onto the second confocal pinhole. This aperture is confocal with the in-focus voxel elements in the specimen. The virtual elimination by defocusing of all distal and proximal flanking emissions at this physical point assures that the light passing onto the detector, a sensitive photodetector or camera, is specifically derived from in-focus object voxels, with a resolution, e.g. in the Leica instrument, approaching 200 to 400 nm in the x/y and z directions, respectively. In order to image the entire object, the light spot is scanned by a second mirror in the x/y plane, in successive z sections positioned by means of a precision stage motor. Rapid scanning preserves fluorescent intensity but must be reconciled with image quality. The storage, retrieval and manipulation of light intensity information from the object makes static and dynamic 3-D imaging possible.
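The scanning procedure just described (an x/y sweep of the light spot, repeated for successive z sections) can be sketched as a simple nested loop. This is an illustrative sketch only; `read_detector` is a hypothetical stand-in for the instrument's photodetector read-out, not part of any real microscope API:

```python
# Illustrative sketch of the confocal acquisition order described above:
# for each z section (precision stage motor), the mirror scans the x/y
# plane and the photodetector is sampled once per spot position.

def read_detector(x, y, z):
    """Hypothetical detector read-out; returns a dummy intensity."""
    return 0

def acquire_stack(nx, ny, nz):
    """Collect a z-stack of nz sections, each an ny-by-nx intensity grid."""
    stack = []
    for z in range(nz):                       # successive z sections
        section = [[read_detector(x, y, z) for x in range(nx)]
                   for y in range(ny)]        # x/y scan of one section
        stack.append(section)
    return stack

stack = acquire_stack(4, 3, 2)   # tiny 4 x 3 x 2 example stack
```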

Although not perfect, the new method shows several significant benefits compared to the traditional procedures. The most important of them are: it is a true tomographic method, offers significant freedom in choosing slice thickness and size, makes registration of slices trivial, is very fast and easy to operate, is capable of acquiring in-vivo cells as well as static or dynamic structures, and is non-destructive. Lastly, by using different types of laser and fluorophore materials, different spatially overlapping structures can be visualized and superimposed within the same probe.

The data acquired with Laser Confocal Microscopy (LCM) show several characteristics requiring specialized treatment in order to make the method applicable:

1. Large data size. Typical datasets have a resolution of 512 × 512 × 64 pixels. These pixels are colored, thus a typical RGB dataset requires some 50 Mbytes of memory. Obviously, datasets of this size require efficient processing methods.

2. Low contrast, low intensity gradients, poor signal-to-noise ratio. These characteristics make a straightforward segmentation between the structures of interest and the background (e.g. by thresholding, region growing, homogeneity, color differences etc.) impossible. All the methods listed above apply more or less binary decision criteria to determine whether a pixel/voxel belongs to the structure or not. Such criteria typically fail when used with signals showing the characteristics listed above.

3. Unequal resolutions in the plane and the depth directions. Thus, a visualization method has to be able to operate on "blocks" of unequal edge lengths instead of cubic voxels. Re-sampling of the raw data to a regular cubic field would further reduce the signal quality, introduce interpolation artifacts and generate an even larger dataset, probably too large to be handled with conventional computers.

4. Regarding quality, artifacts have to be avoided as far as possible. Introducing artifacts into an unknown structure will often have fatal effects on its interpretation, since the human observer does not always have the experience for judging the correctness or fidelity of the presented structures. As an example, an obvious artifact caused by bad parameter settings of the software during the visualization of human anatomy (e.g. of a head) is immediately detected by the observer, since the human anatomy is well known. This is not the case when inspecting an unknown dataset.

5. Choosing the "correct" illumination model (e.g. MIP, semi-transparent, surface etc.) has a significant impact on the clarity and information content of the visualization. Again, due to the lack of experience, such a decision is typically much more difficult than in the case of anatomic tomographic data.

6. The speed of visualization becomes the most crucial issue. The visualization parameters have to be adjusted in an interactive, trial-and-error procedure. This can take a very long time if, e.g., after an adjustment the user has to wait for several minutes to see the new result. Furthermore, inspection of new, unknown structures requires rapid changing of directions, illumination conditions, visualization models, etc.

Looping and stereo images are of enormous importance for understanding unknown, complicated spatial structures.
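Point 2 above can be made concrete with a one-dimensional toy example: when the intensity difference between structure and background is small relative to the noise, any fixed threshold misclassifies a large fraction of samples. All signal parameters below are invented for illustration:

```python
import random

random.seed(0)

# Toy low-contrast, noisy signal: background around 100, structure
# around 110 (small intensity gradient), both with heavy noise (sigma 20).
background = [random.gauss(100, 20) for _ in range(1000)]
structure  = [random.gauss(110, 20) for _ in range(1000)]

threshold = 105  # best possible fixed cut: midpoint of the two means

false_pos = sum(v >= threshold for v in background) / len(background)
false_neg = sum(v <  threshold for v in structure)  / len(structure)
print(f"false positives: {false_pos:.0%}, false negatives: {false_neg:.0%}")
# Even the optimal threshold misclassifies a large fraction (here roughly
# 40%) of the samples on each side -- a binary criterion cannot separate
# these classes.
```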

The main requirement here is to employ a fast volumetric method allowing interactive and intuitive parameter settings during the visualization session. Detailed results of the employed volume visualization are reported in [80]. Figures 31 and 30 present a microscopic preparation of the tubular structure of a cat retina. The dataset consists of 335 × 306 × 67 voxels, each with a dimension of 0.16 × 0.16 × 0.2 µm. The first image presents the extra-cellular component of the blood vessel. The vessel diameter before the branch point is 19 µm. The second image shows the wire-like structure of the astrocyte cytoskeleton. Both datasets originate from the same probe. In all subsequent images the difference between the slicing, MIP, surface and semi-transparent visualization methods is shown.
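Handling the anisotropic voxels of such datasets (point 3 above) amounts to carrying a physical spacing per axis rather than re-sampling to unit cubes. A minimal sketch, using the 0.16 × 0.16 × 0.2 µm spacing of the retina datasets:

```python
# Per-axis voxel spacing in micrometres (x, y, z); the z step is larger
# than the in-plane step, so the voxels are "blocks", not cubes.
SPACING = (0.16, 0.16, 0.2)

def voxel_to_physical(i, j, k, spacing=SPACING):
    """Physical position (in µm) of voxel index (i, j, k)."""
    sx, sy, sz = spacing
    return (i * sx, j * sy, k * sz)

def physical_to_voxel(x, y, z, spacing=SPACING):
    """Voxel index containing the physical position (x, y, z) in µm."""
    sx, sy, sz = spacing
    return (int(x / sx), int(y / sy), int(z / sz))

# A ray caster can step through the volume in physical units and use
# these mappings, instead of interpolating the data onto cubic voxels.
print(physical_to_voxel(0.5, 0.5, 0.5))  # -> (3, 3, 2)
```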

Figure 30: The wire-like structure of the astrocyte cytoskeleton of the same probe. Resolution of 335 × 306 × 67 voxels with a size of 0.16 × 0.16 × 0.2 µm. Upper left a single slice, right surface reconstruction; lower left MIP, right transmission illumination models
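The difference between the MIP and transmission (semi-transparent) models compared in these figures lies only in how the samples along a viewing ray are combined: MIP keeps the brightest sample, while a semi-transparent model composites all samples with an opacity. The ray values and opacity below are invented for illustration:

```python
# Intensities sampled along one viewing ray through the volume (0..1).
ray = [0.1, 0.3, 0.9, 0.2, 0.4]

# Maximum intensity projection: the pixel shows only the brightest sample.
mip = max(ray)

def composite(samples, alpha=0.3):
    """Front-to-back compositing with a constant per-sample opacity."""
    color, transparency = 0.0, 1.0
    for s in samples:
        color += transparency * alpha * s   # emission weighted by visibility
        transparency *= 1.0 - alpha         # absorption along the ray
    return color

print(f"MIP: {mip}, composited: {composite(ray):.3f}")
```

Every sample contributes to the composited value, which is why the semi-transparent images convey interior structure that a MIP of the same ray discards.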

The next dataset (fig. 32) shows the complicated structure of nerve cell networks. With 512 × 512 × 100 voxels, the dataset is 25 Mbytes large. As one can see in the upper left of image 32, single slices are not able to provide full understanding of the complicated topology. The three other images show the internal structure of the cell network in much better detail.
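The memory figures quoted for these datasets follow directly from the voxel counts, assuming one byte per voxel per color channel and 1 Mbyte = 2^20 bytes:

```python
# Dataset size in Mbytes from voxel dimensions and bytes per voxel.
def mbytes(nx, ny, nz, bytes_per_voxel=1):
    return nx * ny * nz * bytes_per_voxel / 2**20

print(round(mbytes(512, 512, 100)))    # nerve-cell dataset -> 25 Mbytes
print(round(mbytes(512, 512, 64, 3)))  # RGB dataset of point 1 -> 48, i.e. "some 50"
```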

Laser confocal microscopy plays a fundamental role for gathering in-vivo data about not only static, but also dynamic structures, i.e. structures typically existing only within living cells and for a very short period of time (e.g. for a few seconds). Such structures are common in several biological applications. In the case referred to here we present temporary structures formed by polymerized actin, a structure necessary for cell movements.

Figures 33 and 34 demonstrate the importance of LCM data visualization for detecting unknown structures.

In this case, we studied actin filaments in Dictyostelium amoebae over time periods ranging from 10 to 100 seconds. The data resolution is 512 × 484 × 43 voxels = 10 Mbytes. Note the structure of the surface visible in the "surface volume rendering" image. These structures are hardly visible and therefore difficult to detect when regarding individual slices.

7 Conclusions

Medical volume visualization has come a long way from the first experiments to the current, highly detailed 3D images.

Figure 31: The extra-cellular component of a retina blood vessel of a cat. Resolution of 335 × 306 × 67 voxels with a size of 0.16 × 0.16 × 0.2 µm. Upper left a single slice, right surface reconstruction; lower left MIP, right transmission illumination models

Figure 32: The complicated topology of nerve cell networks. Resolution 512 × 512 × 100 voxels = 25 Mbytes.

Upper left a single slice, right surface reconstruction; lower left MIP, right transmission illumination models

3D images are not just pretty pictures, but a powerful source of information for research, education, and patient care. In certain areas such as craniofacial surgery or traumatology, volume visualization is increasingly becoming part of the standard preoperative procedures. New applications such as 3D cardiology, 3D ultrasound and laser confocal microscopy are becoming more and more popular. Further rapid development of

Figure 33: F-actin structures in Dictyostelium amoebae. Resolution 512 × 484 × 43 voxels = 10 Mbytes. Upper left a single slice, right surface reconstruction; lower left MIP, right transmission illumination models

Figure 34: F-actin structures. Resolution 512 × 484 × 70 voxels = 16.5 Mbytes. Upper left a single slice, right surface reconstruction; lower left MIP, right transmission illumination models

volume visualization methods is widely expected [45].

A number of problems still hinder an even broader use of volume visualization in medicine. First, and most importantly, the segmentation problem is still unsolved. It is no coincidence that volume visualization is most accepted in those areas where clinicians are interested in bone from CT. Especially for MRI, however, automatic segmentation methods are still far from being generally applicable, while interactive procedures are much too time-consuming. As has been shown, research in different directions is going on; in many cases, methods have already proven valuable for specific applications.

The second major problem is the design of a user interface which is suitable in a clinical environment. Currently, there is still a large number of rather technical parameters for controlling segmentation, matching, shading, and so on. Acceptance in the medical community will certainly depend heavily on progress in this field.

Third, current workstations are not yet able to deliver 3D images fast enough. For the future, it is certainly desirable to interact with the workstation in real time, instead of just looking at static images or pre-calculated movies. However, with computing power further increasing, this problem will be overcome in just a few years even on low-cost platforms.

As has been shown, a number of applications based on volume visualization are becoming operational, such as surgical simulation systems and three-dimensional atlases. Another intriguing idea is to combine volume visualization with virtual reality systems, which enable the clinician to walk around or fly through a virtual patient, see fig. 35 [77], [49]. In augmented reality, images from the real and the virtual world are merged to guide the surgeon during an intervention [6].

Figure 35: OP 2000: The Operation Theatre of the Future

Integration of volume visualization with virtual reality and robotics towards computer-integrated surgery will certainly be a major topic in the coming decade [92, 103, 77].

Acknowledgements

Georgios Sakas wants to thank Axel Hildebrand, Stefan Großkopf, Jürgen Jäger, Rainer Malkewitz and Stefan Walter, who did a lot of work within the different sections described above, and Peter Plath and Mike Vicker from the University of Bremen for the laser confocal microscopy data. Furthermore, data have been provided by Kretztechnik, the Visible Human Project, Deutsche Klinik für Diagnostik Wiesbaden, H.T.M. van der Voort, Department of Molecular Biology, University of Amsterdam, Herbert Stüttler and Martin Hoppe, Leica Lasertechnik GmbH, the State Hospital of Darmstadt, and Rolf Ziegler (fig. 36).

Figure 36: Virtual Arthroscopy: using VR and force feedback for training surgical procedures

Andreas Pommert is grateful to Prof. Karl Heinz Höhne, to his colleagues Bernhard Pflesser, Martin Riemer, Thomas Schiemann, Rainer Schubert and Ulf Tiede for providing many of the examples and images presented here, and to the students Jan Freudenberg, Sang-Il Kim, Henning Krämer, Kay Priesmeyer, Christian Seebode and Frank Wilmer for many discussions and practical assistance. Tomographic raw data were kindly provided by Siemens Medical Systems, Erlangen (figs. 3, 4, 7, 8, 12), and the National Library of Medicine (fig. 13). Applications are in cooperation with Wolf-Joachim Höltje, Dept. of Craniofacial Surgery (fig. 9), Werner Lierse, late head of the Dept. of Anatomy (figs. 7, 12), Ernst Richter, Dept. of Pediatrics (fig. 3), and Jürgen Wening, Allgemeines Krankenhaus Altona (fig. 10).

References

[1] L. Adams, W. Krybus, Dietrich Meyer-Ebrecht, R. Rueger, J. M. Gilsbach, Ralph Moesges, and G. Schloendorff. Computer-assisted surgery. IEEE Comput. Graphics Appl., 10(3):43–51, 1990.

[2] M. E. Alder, S. T. Deahl, and S. R. Matteson. Clinical usefulness of two-dimensional reformatted and three-dimensionally rendered computerized images: Literature review and a survey of surgeons' opinions. J. Oral Maxillofac. Surg., 53(4):375–386, 1995.

[3] L. K. Atata, A. P. Dhawan, J. P. Broderick, M. F. Gaskil-Shipley, A. V. Levy, and N. D. Volkow. Three-dimensional anatomical model-based segmentation of MR brain images through principal axes registration. IEEE Trans. Biomed. Eng., 42(11):1069–1078, 1995.

[4] R. S. Avila, L. M. Sobierajski, and Arie E. Kaufman. Towards a comprehensive volume visualization system. In Proc. Visualization ’92, pages 13–20, Boston, MA, 1992.

[5] K. Stach, S. Sakamoto, and K. Baba. Development of an ultrasound system for 3D-reconstruction of the foetus. Volume 17, 1989.

[6] M. Bajura, Henry Fuchs, and R. Ohbuchi. Merging virtual objects with the real world: Seeing ultrasound imagery within the patient. Comput. Graphics, 26(2):203–210, 1992.

[7] W. Barrett and E. Bess. Interpolation by directed distance morphing. In R. A. Robb, editor, Visualization in Biomedical Computing 1994, Proc. SPIE 2359, pages 110–121, Rochester, MN, 1994.

[8] J. Beier. Automatische Quantifizierung von Koronarstenosen aus angiographischen Röntgenbildern, Fortschr.-Ber. VDI Reihe 17 Nr. 95. VDI-Verlag, Düsseldorf, 1993.

[9] James F. Blinn. Light reflection functions for simulation of clouds and dusty surfaces. Comput. Graphics, 16(3):21–29, 1982.

[10] J.-D. Boissonnat. Surface Reconstruction from Planar Cross-Sections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 393–397, June 1985.

[11] Michael Bomans. Segmentationsverfahren zur 3D-Visualisierung von Kernspintomogrammen des Kopfes: Evaluierung der Standardverfahren und Entwurf und Realisierung eines klinisch einsetzbaren Systems. Dissertation, Fachbereich Informatik, Universität Hamburg, 1994.

[12] Michael Bomans, Karl Heinz Höhne, Ulf Tiede, and Martin Riemer. 3D-segmentation of MR-images of the head for 3D-display. IEEE Trans. Med. Imaging, MI-9(2):177–183, 1990.

[13] M. E. Brummer, R. M. Mersereau, R. L. Eisner, and R. R. J. Lewine. Automatic detection of brain contours in MRI data sets. IEEE Trans. Med. Imaging, 12:153–166, 1993.

[14] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Machine Intell., PAMI-8(6):679–698, 1986.

[15] L. S. Chen, Gabor T. Herman, R. A. Reynolds, and Jayaram K. Udupa. Surface shading in the cuberille environment. IEEE Comput. Graphics Appl., 5(12):33–43, 1985.

[16] Harvey E. Cline, William E. Lorensen, Ron Kikinis, and Ferenc Jolesz. Three-dimensional segmentation of MR images of the head using probability and connectivity. J. Comput. Assist. Tomogr., 14(6):1037–1045, 1990.

[17] Harvey E. Cline, William E. Lorensen, S. Ludke, C. R. Crawford, and B. C. Teeter. Two algorithms for three-dimensional reconstruction of tomograms. Med. Phys., 15(3):320–327, 1988.

[18] Harvey E. Cline, William E. Lorensen, S. P. Souza, Ferenc A. Jolesz, Ron Kikinis, Guido Gerig, and T. E. Kennedy. 3D surface rendered MR images of the brain and its vasculature. J. Comput. Assist. Tomogr., 15(2):344–351, 1991.

[19] D. J. David, D. C. Hemmy, and R. D. Cooter. Craniofacial Deformities: Atlas of Three-Dimensional Reconstruction from Computed Tomography. Springer-Verlag, New York, 1990.

[20] D. M. Shotton. Confocal scanning optical microscopy and its applications for biological specimens. J. Cell Sci., 94:175–206, 1989.

[21] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. Comput. Graphics, 22(4):65–74, 1988.

[22] Robert A. Drebin, Donna Magid, D. D. Robertson, and Elliot K. Fishman. Fidelity of three-dimensional CT imaging for detecting fracture gaps. J. Comput. Assist. Tomogr., 13(3):487–489, 1989.

[23] Hans-Heino Ehricke and G. Laub. Combined 3D-display of cerebral vasculature and neuroanatomic structures in MRI. In Karl Heinz Höhne, Henry Fuchs, and Stephen M. Pizer, editors, 3D-Imaging in Medicine: Algorithms, Systems, Applications, volume 60 of NATO ASI Series F, pages 229–239. Springer-Verlag, Berlin, 1990.

[24] L. A. Feldkamp, L. C. Davis, and J. W. Kress. Practical cone-beam algorithm. Journal of the Optical Society of America A, 1(6):612–619, 1984.

[25] Elliot K. Fishman, Derek R. Ney, and Donna Magid. Three-dimensional imaging: Clinical applications in orthopedics. In Karl Heinz Höhne, Henry Fuchs, and Stephen M. Pizer, editors, 3D-Imaging in Medicine: Algorithms, Systems, Applications, volume 60 of NATO ASI Series F, pages 425–440. Springer-Verlag, Berlin, 1990.

[26] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley Publ. Comp., Reading, MA, 2. edition, 1990.

[27] Bernhard Geiger and Ron Kikinis. Simulation of endoscopy. In Nicholas Ayache, editor, Computer Vision, Virtual Reality and Robotics in Medicine, Proc. CVRMed ’95, volume 905 of Lecture Notes in Computer Science, pages 277–281. Springer-Verlag, Berlin, 1995.

[28] Guido Gerig, O. Kübler, Ron Kikinis, and Ferenc A. Jolesz. Nonlinear anisotropic filtering of MRI data. IEEE Trans. Med. Imaging, 11(2):221–232, 1992.

[29] Guido Gerig, J. Martin, Ron Kikinis, O. Kübler, M. Shenton, and Ferenc A. Jolesz. Automating segmentation of dual-echo MR head data. In Alan C. F. Colchester and D. J. Hawkes, editors, Information Processing in Medical Imaging, Proc. IPMI '91, volume 511 of Lecture Notes in Computer Science, pages 175–187. Springer-Verlag, Berlin, 1991.

[30] S. Großkopf and A. Hildebrand. Three-Dimensional Reconstruction of Coronary Arteries from X-Ray Projections. In P. Lanzer and M. Lipton, editors, Vascular Diagnostics: Principles and Technology (to be published). Springer-Verlag, Heidelberg, 1996.

[31] S. Haring, Max A. Viergever, and J. N. Kok. A multiscale approach to image segmentation using Kohonen networks. In H. H. Barrett and A. F. Gmitro, editors, Information Processing in Medical Imaging, Proc. IPMI '93, volume 687 of Lecture Notes in Computer Science, pages 212–224. Springer-Verlag, Berlin, 1993.

[32] D. C. Hemmy and P. L. Tessier. CT of dry skulls with craniofacial deformities: Accuracy of three-dimensional reconstruction. Radiology, 157(1):113–116, 1985.

[33] A. Hildebrand. Bestimmung computer-graphischer Beschreibungsattribute für reale 3D-Objekte mittels Analyse von 2D-Rasterbildern. PhD Thesis, TH Darmstadt, Darmstadt, 1996.

[34] A. Hildebrand and S. Großkopf. 3D Reconstruction of Coronary Arteries from X-Ray Projections. In Proceedings of the Computer Assisted Radiology CAR'95 Conference, Berlin, 1995. Springer-Verlag.

[35] W. Hiltman. Die 3D-Strukturrekonstruktion aus Ultraschallbildern. October 1994.

[36] Karl Heinz Höhne and Ralph Bernstein. Shading 3D-images from CT using gray level gradients. IEEE Trans. Med. Imaging, MI-5(1):45–47, 1986.

[37] Karl Heinz Höhne, Michael Bomans, Bernhard Pflesser, Andreas Pommert, Martin Riemer, Thomas Schiemann, and Ulf Tiede. Anatomic realism comes to diagnostic imaging. Diagn. Imaging, (1):115–121, 1992.

[38] Karl Heinz Höhne, Michael Bomans, Andreas Pommert, Martin Riemer, Carsten Schiers, Ulf Tiede, and Gunnar Wiebecke. 3D-visualization of tomographic volume data using the generalized voxel-model. Visual Comput., 6(1):28–36, 1990.

[39] Karl Heinz Höhne and W. A. Hanson. Interactive 3D-segmentation of MRI and CT volumes using morphological operations. J. Comput. Assist. Tomogr., 16(2):285–294, 1992.

[40] Karl Heinz Höhne, Bernhard Pflesser, Andreas Pommert, Martin Riemer, Thomas Schiemann, Rainer Schubert, and Ulf Tiede. A new representation of knowledge concerning human anatomy and function. Nature Med., 1(6):506–511, 1995.

[41] A. Höss, J. Debus, R. Bendl, R. Engenhart-Cabillic, and Wolfgang Schlegel. Computerverfahren in der dreidimensionalen Strahlentherapieplanung. Radiologe, 35(9):583–586, 1995.

[43] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active Contour Models. In Proceedings of the First IEEE International Conference on Computer Vision, pages 259–268, 1987.

[44] Arie Kaufman, editor. Volume Visualization. IEEE Computer Society Press, Los Alamitos, CA, 1991.

[45] Arie Kaufman, Karl Heinz Höhne, Wolfgang Krüger, Larry J. Rosenblum, and Peter Schröder. Research issues in volume visualization. IEEE Comput. Graphics Appl., 14(2):63–67, 1994.

[46] M. L. Kessler and D. L. McShan. An application for design and simulation of conformal radiation therapy. In Richard A. Robb, editor, Visualization in Biomedical Computing 1994, Proc. SPIE 2359, pages 474–483, Rochester, MN, 1994.

[47] Ron Kikinis, Martha E. Shenton, Dan V. Iosifescu, Robert W. McCarley, Pairash Saiviroonporn, Hiroto H. Hokama, Andre Robatino, David Metcalf, Cynthia G. Wible, Chiara M. Portas, Robert M. Donnino, and Ferenc A. Jolesz. A digital brain atlas for surgical planning, model driven segmentation, and teaching. IEEE Trans. Visualization Comput. Graphics, 2(3):232–241, 1996.

[48] Teuvo Kohonen. Self-Organisation and Associative Memory. Springer-Verlag, Berlin, 2. edition, 1988.

[49] Wolfgang Krueger and Bernd Froehlich. The responsive workbench. IEEE Comput. Graphics Appl., 14(3):12–15, 1994.

[50] D. Laur and Pat Hanrahan. Hierarchical splatting: A progressive refinement algorithm for volume rendering. Comput. Graphics, 25(4):285–288, 1991.

[51] J. Lengyel, D. P. Greenberg, and R. Popp. Time-Dependent Three-Dimensional Intravascular Ultrasound. In Computer Graphics Proceedings, pages 457–464, Los Angeles, 1995. SIGGRAPH.

[52] M. Levoy. Display of surfaces from volume data. IEEE Comput. Graphics Appl., 8(3):29–37, 1988.

[53] Marc Levoy. Display of surfaces from volume data. IEEE Comput. Graphics Appl., 8(3):29–37, 1988.

[54] Marc Levoy. A hybrid ray tracer for rendering polygon and volume data. IEEE Comput. Graphics Appl., 10(2):33–40, 1990.

[55] Lun-Jou Lo, Jeffrey L. Marsh, Michael W. Vannier, and Vikas V. Patel. Craniofacial computer-assisted surgical planning and simulation. Clin. Plast. Surg., 21(4):501–516, 1994.

[56] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. Comput. Graphics, 21(4):163–169, 1987.

[57] Maria Magnusson, Reiner Lenz, and Per-Erik Danielsson. Evaluation of methods for shaded surface display of CT volumes. Comput. Med. Imaging Graph., 15(4):247–256, 1991.

[58] D. Marr and E. Hildreth. Theory of edge detection. Proc. R. Soc. Lond., B 207:187–217, 1980.

[59] Stephen R. Marschner and Richard J. Lobb. An evaluation of reconstruction filters for volume rendering. In R. Daniel Bergeron and Arie E. Kaufman, editors, Proc. Visualization ’94, pages 100–107, Los Alamitos, CA, 1994. IEEE Computer Society Press.

[60] Wido Menhardt. Iconic fuzzy sets for MR image segmentation. In A. E. Todd-Pokropek and Max A. Viergever, editors, Medical Images: Formation, Handling and Evaluation, volume 98 of NATO ASI Series F, pages 579–591. Springer-Verlag, Berlin, 1992.

[61] R. Millner. Ultraschalltechnik: Grundlagen und Anwendungen.

[62] B. S. Morse, Stephen M. Pizer, and A. Liu. Multiscale medial analysis of medical images. Image Vision Comput., 12(6):327–338, 1994.

[63] Shigeru Muraki. Volume data and wavelet transforms. IEEE Comput. Graphics Appl., 13(4):50–56, 1993.

[64] Shigeru Muraki. Multiscale volume representation by a DOG wavelet. IEEE Trans. Visualization Comput. Graphics, 1(2):109–116, 1995.

[65] P.J. Neugebauer. Interactive Segmentation of Dentistry Range Images in CIM Systems for the Construction of
