
3.3 Interactive Exploration of Complex 3D Models

When exploring complex (i.e., highly detailed) 3D models, with information to be found at multiple scales, it is easy for inexperienced users to lose spatial context (R5, R6). For this reason, interaction methods that allow natural exploration of the whole 3D model from a wide range of distances (i.e., from global shape overview to close surface inspection) are required. In addition, in order to support ubiquitous exploration of 3D models, we need to consider a wide range of display and user-interface configurations, including mobile devices and Web browsers, but also large display installations (R2, R7, R9). In this section we discuss the most relevant approaches that deal with these problems.

3.3.1 Motion Control for Virtual Exploration

In the context of visualization of massive, complicated scenes, users require interactive control to effectively explore the data (R5). Most of the work in this area is connected to camera/object motion control [Chri 09, Jank 13]. Variations of the virtual trackball [Chen 88, Shoe 92, Henr 04], which decompose motion into pan, zoom, and orbit, are the most commonly employed approaches. In order to solve the lost-in-space problem and avoid collisions with the environment, Fitzmaurice et al. [Fitz 08] have proposed the Safe Navigation Technique, which, however, requires explicit positioning of the rotation pivot, and thus needs precise pointing.
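
To make the trackball metaphor concrete, the following minimal sketch (ours, not taken from any cited implementation) maps 2D pointer positions onto a virtual sphere, using Bell's hyperbolic-sheet variant outside the ball to avoid edge discontinuities, and derives the orbit rotation from a drag between two such points.

```python
# Minimal virtual-trackball sketch in the spirit of [Shoe 92];
# all names and parameters are illustrative.
import numpy as np

def to_sphere(x, y, width, height, radius=1.0):
    """Map a window position to a point on the virtual trackball.

    Points near the center are lifted onto the sphere; points farther
    out fall on a hyperbolic sheet (Bell's variant), which avoids the
    discontinuity at the sphere's silhouette.
    """
    px = (2.0 * x - width) / width    # normalize to [-1, 1]
    py = (height - 2.0 * y) / height  # flip so y points up
    d2 = px * px + py * py
    r2 = radius * radius
    if d2 <= 0.5 * r2:
        pz = np.sqrt(r2 - d2)         # on the sphere
    else:
        pz = 0.5 * r2 / np.sqrt(d2)   # on the hyperbolic sheet
    v = np.array([px, py, pz])
    return v / np.linalg.norm(v)

def drag_rotation(p0, p1):
    """Axis and angle of the rotation taking sphere point p0 to p1.

    p0 and p1 come from to_sphere() at the start and end of a drag;
    a zero-length drag yields a zero axis (no rotation).
    """
    axis = np.cross(p0, p1)
    angle = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
    return axis, angle
```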

Moreover, motion decomposition and pivot positioning can be difficult for novice users. For this reason, a number of authors have proposed techniques for effective object inspection that constrain the viewpoint by using precomputed/authored camera properties [Hans 97, Burt 02, Burt 06], increasing authoring time and limiting effective view selection. Proximal surface navigation methods constrain the camera to stay in a region around the object and with a specific orientation with respect to the visible surface [Khan 05, McCr 09, Moer 12, Mart 12b]. These methods, following a hovercraft metaphor, slide a target point on the surface and compute camera positions so as to look at this point from a constant distance and at a good orientation. Surface/depth smoothing and algorithmic solutions are used to reduce jerkiness and to deal with singularities and out-of-view-field collisions. Moreover, disconnected surfaces are difficult to handle with these approaches.
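
The hovercraft idea can be illustrated with a small sketch; the function name and the simple exponential normal smoothing below are our own assumptions, standing in for the more elaborate surface/depth smoothing used by the cited methods.

```python
# Illustrative proximal-navigation camera update ([Khan 05]-style
# hovercraft): hover at a fixed distance above a target point that
# slides on the surface, oriented along a smoothed normal.
import numpy as np

def proximal_camera(target, normal, prev_normal, distance, smoothing=0.2):
    """Compute an eye position hovering above a surface point.

    target      : 3D point on the surface being inspected
    normal      : unit surface normal at the target
    prev_normal : smoothed normal from the previous frame
    distance    : desired constant hovering distance
    smoothing   : blend factor in (0, 1]; smaller values damp normal
                  jumps more strongly (reduces jerkiness)
    """
    # Temporally blend toward the new normal so small surface details
    # do not make the camera jitter.
    n = (1.0 - smoothing) * prev_normal + smoothing * normal
    n /= np.linalg.norm(n)
    eye = target + distance * n
    view_dir = -n  # look straight down at the target point
    return eye, view_dir, n  # n becomes prev_normal next frame
```

Singularity handling (e.g., when consecutive normals nearly oppose each other) and collision avoidance are precisely where the cited methods add their algorithmic machinery on top of a core loop like this one.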

3.3.2 Image-assisted Exploration

Using views to help navigate within a 3D data-set is often implemented with thumbnail-bars. At any given time, one image of the data-set can be selected by the user as the current focus. In a 3D navigation application [Lipp 80, Snav 06], where images are linked to viewpoints, selecting the focus image drives the setup of the virtual camera. Often, these images are also linked to additional information, which is displayed when the user selects them, as an alternative to the usage of hot-spots [Andu 12, Beso 08]. The organization of the images in this kind of tool can be considered a challenging problem in many CH scenarios, since simple grid layout approaches do not scale up well with the number of images. A trend consists in clustering images hierarchically, according to some kind of image semantics, such as combining time and space [Ryu 10], spatial image-distances [Epsh 07, Jang 09], or a mixture of them [Mota 08], to compute image-clusters automatically or interactively [Girg 09, Cram 09]. Most of these works strive to identify good clusterings for images, rather than good ways to dynamically present and explore the clustered data-set. Goetzelmann et al. [Gotz 07] presented a system to link textual information and 3D models, where links in the text are associated to predefined points of view, and the definition of a point of view activates a mechanism proposing contextual information.
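
As a toy illustration of clustering by spatial image-distance, the following greedy threshold scheme (our own simplification, not an algorithm from the cited works) groups camera positions around cluster representatives:

```python
# Toy spatial clustering of view thumbnails by camera position,
# a much-simplified instance of the image-distance clustering
# discussed above (cf. [Epsh 07, Jang 09]).
import numpy as np

def cluster_viewpoints(positions, radius):
    """Greedily group camera positions closer than `radius` to an
    existing cluster representative; returns a list of index lists."""
    clusters, reps = [], []
    for i, p in enumerate(positions):
        for members, rep in zip(clusters, reps):
            if np.linalg.norm(p - rep) < radius:
                members.append(i)
                break
        else:  # no nearby representative: start a new cluster
            clusters.append([i])
            reps.append(p)
    return clusters
```

Hierarchical variants recursively re-cluster with decreasing radii, which is what makes the thumbnail organization scale to large image sets.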

3.3.3 Small Screen Interfaces for 3D Model Exploration

Most of the systems described in Sec. 3.3.1 require that the user has direct control over the visualization space, but this solution can be ineffective when the visualization area is very small (R9), as happens on smartphones or when exploring model details (i.e., very close views). To solve this problem, Decle and Hachet [Decl 09] proposed an indirect method based on strokes for moving 3D objects on a touch-screen mobile phone, while Kratz et al. [Krat 10] introduced an extension of the virtual trackball metaphor, which is typically restricted to a half sphere and single-sided interaction, to a full sphere, by employing the “iPhone Sandwich” hardware extension, which allows for simultaneous front-and-back touch input. However, the latter solutions are suited for small-scale models and they employ a fixed center of rotation at the barycenter.

An automatic pivoting method has been recently presented by Trindade and Raboso [Trin 11], but the method requires access to the depth buffer, and is therefore not easily adaptable to all mobile systems. Furthermore, their method computes the rotation center as the intersection of the viewing vector with the object surface, and it suffers from discontinuities when complex models with sharp features are considered.
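
The depth-based pivoting idea itself is simple to state: read the depth value under the cursor and unproject it to world space, as in the following sketch (names are illustrative; matrix and window conventions follow OpenGL/gluUnProject):

```python
# Sketch of depth-buffer pivoting: unproject the depth sample under
# the cursor to obtain a world-space rotation center.
import numpy as np

def pivot_from_depth(x, y, depth, viewport, view, proj):
    """Unproject window coordinates plus depth to a world-space pivot.

    x, y     : cursor position in window pixels (origin bottom-left)
    depth    : depth-buffer value in [0, 1] at (x, y)
    viewport : (vx, vy, width, height)
    view/proj: 4x4 view and projection matrices (column vectors)
    """
    vx, vy, w, h = viewport
    # Window -> normalized device coordinates in [-1, 1].
    ndc = np.array([2.0 * (x - vx) / w - 1.0,
                    2.0 * (y - vy) / h - 1.0,
                    2.0 * depth - 1.0,
                    1.0])
    # NDC -> world via the inverse of the combined transform.
    world = np.linalg.inv(proj @ view) @ ndc
    return world[:3] / world[3]  # perspective divide
```

The discontinuities noted above arise because the unprojected point jumps whenever the cursor crosses a depth edge; a practical system must smooth or clamp the pivot across such transitions.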

3.3.4 Multi-touch Interfaces for 3D Model Exploration

In recent years, researchers have started to combine multi-point input technology with interactive 3D graphics. Hancock et al. [Hanc 09, Hanc 07] proposed different approaches using direct and indirect multi-touch interaction techniques for tabletop surfaces to manipulate 2D and 3D content. Martinet et al. [Mart 12a] have studied the effect of DOF separation in a 3D manipulation task on a direct multi-touch display. Reisman et al. [Reis 09] proposed co-location constraints for direct-touch object manipulation. De la Riviere et al. [Rivi 08] designed the Cubtile, an indirect multi-touch box-shaped device with which users interact with 3D content. Recently, direct-touch interfaces have been created for interacting with visualizations in application domains ranging from astrophysics [Fu 10] to oceanography [Butk 11], fluid mechanics [Klei 12], and medicine [Lund 11]. In particular, Yu et al. [Yu 10] proposed a general direct-touch technique allowing users to explore 3D data representations and visualization spaces. Recent work on multi-touch surfaces has also focused on indirect interaction, which is not guided by co-location concerns and seems particularly interesting for interactive 3D visualization [Mosc 08]. In particular, Knoedel and Hachet [Knoe 11] showed that direct touch shortens completion times, while indirect interaction improves efficiency and precision, and this is particularly true for 3D visualizations. All these systems are targeted to scientific visualization of complex data, where non-trivial interaction may be required, and are thus not suitable for use cases such as a museum setting with mostly naive users.
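
For concreteness, many of the manipulation techniques above build on the standard two-finger rotate-scale-translate (RST) decomposition; a minimal sketch (ours, not taken from any cited system) follows:

```python
# Two-finger rotate-scale-translate (RST) decomposition: derive a 2D
# transform from the motion of two touch points between frames.
import numpy as np

def rst_from_touches(p0_old, p1_old, p0_new, p1_new):
    """Return (translation, scale, angle) from two touch trajectories.

    p*_old / p*_new : 2D touch positions at consecutive frames.
    """
    c_old = 0.5 * (p0_old + p1_old)  # previous centroid
    c_new = 0.5 * (p0_new + p1_new)  # current centroid
    v_old = p1_old - p0_old          # previous inter-finger vector
    v_new = p1_new - p0_new          # current inter-finger vector
    translation = c_new - c_old
    scale = np.linalg.norm(v_new) / np.linalg.norm(v_old)
    angle = (np.arctan2(v_new[1], v_new[0]) -
             np.arctan2(v_old[1], v_old[0]))
    return translation, scale, angle
```

The DOF-separation question studied by [Mart 12a] is essentially about whether these three components should be applied jointly, as here, or split across separate gestures.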


3.3.5 Large Displays and Dual Display Setups

In order to support interactive large display installations (R2), dual-display setups are typically used. Recently, a number of visualization systems have been presented exploiting two display surfaces. In general, these systems employ a horizontal interactive touch table and a large display wall visualizing the data, sometimes combining them into a continuous curved display [Weis 10b, Wimm 10]. A two-display volumetric exploration system was proposed by Coffey et al. [Coff 12], which, however, employs a World-In-Miniature metaphor, simultaneously displaying a large-scale detailed data visualization and an interactive miniature.

3.3.6 3D Stereoscopic Rendering with Light Field Displays

Recent advances in 3D displays provide an interesting platform for exhibition installations where highly detailed 3D models can be presented exploiting stereoscopic viewing, thus providing a much more immersive experience (R2, R7). Light field displays provide unrestricted stereoscopic viewing and parallax effects without special glasses or head tracking. They are intrinsically multi-user and can be built by using high-resolution displays or, alternatively, multi-projector systems with parallax barriers or lenticular screens. The light field display hardware employed for this work is manufactured by Holografika (see www.holografika.com) and is commercially available. It uses a specially arranged projector array, driven by a cluster of PCs, and a holographic screen.

Large, multi-view light field displays require generating multiple images, one for each available perspective. In state-of-the-art rendering methods for such displays, multiple-center-of-projection (MCOP) geometries [Jone 07] and adaptive sampling [Agus 08] are exploited to fit the display geometry and the finite angular resolution of the light beams. One particular characteristic of these displays is that the resolution varies with the projection depth, and is typically optimal on the screen plane.
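
As an illustration of the multi-view requirement, the following sketch (parameter names are our own; actual drivers use display-specific MCOP geometries [Jone 07, Agus 08]) spreads per-view eye positions across the display's field of view; each eye would then render with an off-axis frustum converging on the fixed screen rectangle:

```python
# Per-view eye placement for a horizontal-parallax multiview display:
# observer positions are spread across the display's angular field of
# view on a line parallel to the screen.
import numpy as np

def view_eyes(num_views, fov_deg, screen_distance):
    """Eye positions, one per rendered perspective.

    num_views       : number of distinct views the display emits
    fov_deg         : horizontal field of view of the display
    screen_distance : nominal observer distance from the screen plane
    """
    half_span = screen_distance * np.tan(np.radians(fov_deg) / 2.0)
    xs = np.linspace(-half_span, half_span, num_views)
    # Screen plane at z = 0; each eye looks at the same screen quad
    # through an off-axis (sheared) frustum.
    return [np.array([x, 0.0, screen_distance]) for x in xs]
```

Because all frustums converge on the screen plane, points rendered near that plane are sampled most densely, which is exactly the depth-dependent resolution behavior noted above.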