

5.3 Our Approach

5.3.2 Basic Design

Figure 5.1 shows a schematic overview of our design. The application layer (in our case an X3D browser [8]) manages the high-level application logic and mirrors the state of its scene in a low-level OpenSG scene graph [32, 106]. The OpenSG layer (and all layers below) is only concerned with the current state of components (Transforms, Materials, Geometries, etc.) and not with the procedures that change this state (animation, physics, I/O, etc.).

The mediator layer is the main subject of this chapter. It has to be implemented for each rendering back-end (although parts can be reused, as shown in Section 5.4.1). It usually consists of a viewport, a scene adapter, and a context. With the viewport, the mediator can hook itself into OpenSG's rendering infrastructure; it is the only part of a mediator that is mandatory. The scene adapter translates the OpenSG scene into the renderer's internal representation and keeps it up to date. The context manages instances of the back-end and allows multiple viewports to share these instances.

Figure 5.1: Schematic overview of our architecture.
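To make the division of labor concrete, the following C++ outline sketches the three mediator components. All class names (RTViewport, RTSceneAdapter, RTContext) and the render() signature are illustrative assumptions, not the actual implementation; the OpenSG base classes are heavily simplified here.

// Hypothetical outline of a mediator's three components (names assumed).

// 1. Viewport: the only mandatory part. Hooks the back-end into
//    OpenSG's rendering infrastructure by specializing osg::Viewport.
class RTViewport : public osg::Viewport
{
  public:
    // invoked by OpenSG whenever the viewport must be drawn
    // (actual OpenSG signature simplified here)
    virtual void render(osg::DrawActionBase *action);
};

// 2. Scene adapter: translates the OpenSG scene into the back-end's
//    internal representation and keeps it up to date.
class RTSceneAdapter
{
  public:
    void buildFrom(osg::NodePtr root);   // initial conversion
    void sync();                         // apply pending changes
};

// 3. Context: manages instances of the back-end and lets several
//    viewports share them via reference counting.
class RTContext
{
  public:
    static RTContext *acquire();   // obtain a (possibly shared) instance
    void              release();   // drop a reference
};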

Note that there is no direct dependency from the application/OpenSG layer on the mediator layer (access happens only via the default OpenSG Viewport interface), and no dependency from the mediator on the application layer (only on the OpenSG layer; a mediator therefore potentially works with all OpenSG applications).

Also, the mediator depends on the back-end, but not the other way around.

This takes care of our requirements of extensibility, generality, and non-intrusiveness. Since the back-end is fed with its own scene representation and simply asked to fill a viewport, it can usually maintain near-optimal rendering performance. Relying on viewports also enables a simple (but usually sufficient) way to do mixed (hybrid) rendering: viewports can be layered on top of each other in order to combine the images of different back-ends using z-buffering and alpha-blending. The ability to extend the application layer is provided by OpenSG's attachment mechanism [32], which allows attaching arbitrary data to nodes. The application layer only needs to pack data for extensions into attachments, which are then interpreted by mediators that understand the extensions and ignored by others. How the system meets the requirements of fast incremental updates and of clustering and stereo is particularly interesting and is described in Sections 5.3.3 and 5.3.4. For the remainder of this section, however, we focus on the viewport and scene adapter mechanisms.
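As an illustration of this mechanism, the sketch below shows how a mediator might look up an extension attachment during conversion. The attachment type RTOptionsAttachment is a hypothetical example, and the OpenSG 1.x-style lookup call is shown in simplified form.

// Hedged sketch: the application packs extension data into a custom
// attachment (declaration of RTOptionsAttachment omitted; real OpenSG
// attachments are defined via its field-container machinery).
void RTSceneAdapter::convert(osg::NodePtr node)
{
    // Mediators that know the extension look it up explicitly ...
    osg::AttachmentPtr att =
        node->findAttachment(RTOptionsAttachment::getClassType());

    if(att != osg::NullFC)
    {
        RTOptionsAttachmentPtr opts = RTOptionsAttachmentPtr::dcast(att);
        // ... interpret the extension data (e.g. per-object options)
    }
    // Mediators that do not know the type simply never ask for it,
    // so the attachment is ignored and regular conversion proceeds.
}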

5.3.2.1 The OpenSG Scene

The OpenSG scene shown in Figure 5.1 is usually a stripped-down scene graph with only a few node types. We intentionally did not strictly define which nodes a mediator has to understand; unknown nodes can simply be ignored by the scene adapter. Node types that are supported by all our renderers are Transform, Geometry, two types of Materials, Lights, Camera, and Background. This seems to be the minimal set a renderer needs to produce meaningful pictures.

Transforms are simple 4×4 matrix transforms with n children (i.e., they are also Group nodes). Geometry will usually be an indexed triangle mesh, but since OpenSG provides the TriangleIterator interface, it is almost always possible to convert an arbitrary geometry into a triangle soup, which can be handled by most renderers. This includes parametric surface patches. (Of course, a back-end can also choose to interpret them directly, if they are supported.) As far as materials go, we take a pragmatic approach and provide two pre-defined declarative materials: a simple one based on the OpenGL 1.1 material model and a more complex one (CommonSurfaceShader, see Appendix 5.A) based on a recent X3D extension proposal [123]. The CommonSurfaceShader is quite powerful and supports features like perfect specular reflection/refraction and bump mapping. However, it cannot describe every appearance adequately, so a back-end can also interpret ShaderChunks, which may contain explicit shader code in addition to the declarative material nodes. This mechanism offers greater flexibility, but incurs a loss of portability.
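The triangle-soup conversion mentioned above can be sketched with OpenSG's TriangleIterator roughly as follows. The BackendMesh type and its addTriangle call are assumptions; texture coordinates, materials, and error handling are omitted.

// Hedged sketch: flattening an arbitrary OpenSG geometry into a
// triangle soup for a back-end that only understands triangles.
void RTSceneAdapter::convertGeometry(osg::GeometryPtr geo, BackendMesh &mesh)
{
    for(osg::TriangleIterator it  = geo->beginTriangles();
                              it != geo->endTriangles(); ++it)
    {
        // positions and normals of the current triangle's three vertices
        osg::Pnt3f p0 = it.getPosition(0), p1 = it.getPosition(1),
                   p2 = it.getPosition(2);
        osg::Vec3f n0 = it.getNormal(0),   n1 = it.getNormal(1),
                   n2 = it.getNormal(2);

        mesh.addTriangle(p0, p1, p2, n0, n1, n2);   // hypothetical API
    }
}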

Every mediator we implemented so far supports at least point, spot, and directional lights. The ray tracers also support area lights and light probes (see Section 5.4.1). Currently, all back-ends use a simple pinhole camera, which can easily be translated into a view/projection matrix (for rasterization) or a view frustum (for ray tracing). Furthermore, a simple mono-colored background is supported, as well as a sky dome.
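To illustrate why the pinhole model maps easily to both worlds, the sketch below derives a primary ray direction for a ray tracer from the same parameters (eye frame, vertical field of view, aspect ratio) that would otherwise feed a standard perspective projection matrix. All names are illustrative and independent of the actual implementation.

#include <cmath>

// Minimal pinhole camera sketch (illustrative, not the thesis code).
struct Vec3 { float x, y, z; };

static Vec3 scale(Vec3 v, float s) { return Vec3{v.x * s, v.y * s, v.z * s}; }
static Vec3 add(Vec3 a, Vec3 b)    { return Vec3{a.x + b.x, a.y + b.y, a.z + b.z}; }

struct PinholeCamera
{
    Vec3  eye, forward, right, up;   // orthonormal camera frame
    float fovY, aspect;              // vertical field of view (radians)

    // Primary ray direction (not normalized) through pixel (px, py) of a
    // w x h viewport. A rasterizer would instead build the usual
    // perspective projection matrix from the same fovY / aspect values.
    Vec3 rayDirection(int px, int py, int w, int h) const
    {
        float sy = std::tan(0.5f * fovY);
        float sx = sy * aspect;
        float u  = ((px + 0.5f) / w * 2.0f - 1.0f) * sx;
        float v  = (1.0f - (py + 0.5f) / h * 2.0f) * sy;
        return add(forward, add(scale(right, u), scale(up, v)));
    }
};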

5.3.2.2 Viewport

Each mediator exposes a specialized OpenSG-Viewport which internally maps to the underlying renderer. So, every time OpenSG (on behalf of the application) wants a viewport to be rendered, the back-end is invoked to render its (sub-)scene. The target is usually the OpenGL back buffer, but it can also be another render target. We could even implement a viewport that renders an image to disk or streams a video to a website.

The back-ends are invoked exclusively through this specialized Viewport class. If the application wants to use a certain back-end, it just creates the corresponding viewport, assigns camera, root node, and background to this viewport, and attaches it to an OpenSG-Window. The viewport then creates the entire infrastructure necessary to convert the scene and instantiates the underlying renderer. Usually this is done lazily upon the first render request, but other patterns are possible. The viewport also provides the interface to set parameters of the back-end that are not tied to geometry, materials, or other objects (e.g. antialiasing options or maximum ray depth).
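In code, setting up such a back-end viewport might look as follows (OpenSG 1.x style). The RTViewport class and the setMaxRayDepth parameter are hypothetical placeholders for a concrete mediator.

// Hedged usage sketch: attach a back-end viewport to an OpenSG window.
RTViewportPtr vp = RTViewport::create();

osg::beginEditCP(vp);
    vp->setCamera(camera);            // standard Viewport fields
    vp->setRoot(sceneRoot);
    vp->setBackground(background);
    vp->setSize(0, 0, 1, 1);          // cover the whole window
    vp->setMaxRayDepth(8);            // back-end parameter (assumed name)
osg::endEditCP(vp);

osg::beginEditCP(window);
    window->addPort(vp);              // from now on OpenSG renders it
osg::endEditCP(window);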

Using a viewport in such a way also allows us to elegantly use (some may say: abuse) OpenSG's stereo and clustering capabilities. Stereo rendering is possible by simply using two viewports, one for each eye (layered on top of each other, side by side, or even on different machines). In order to prevent wasting resources in such a setup, the viewports usually share the underlying converted scenes and other resources via ref-counted contexts. Arbitrary clustering setups are described in Section 5.3.4. Since viewports can be deactivated and activated on the fly, layered viewports can also be used to quickly switch from one back-end to another. For example, one can use rasterization for navigation and then seamlessly switch to ray tracing once an interesting viewpoint has been reached. Also, common post-processing effects can be attached to the viewport (the anaglyph encoding in Fig. 5.9 is a simple example).
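The resource sharing between, for example, the two eye viewports can be realized with a simple ref-counted context, roughly like the sketch below. The class name and the std::shared_ptr-based counting are assumptions; the actual implementation may differ.

#include <memory>

// Hedged sketch of a shared back-end context: both eye viewports hold
// the same context, so scene conversion and renderer setup happen once.
class RTContext
{
  public:
    // Returns the process-wide shared instance, creating it lazily.
    static std::shared_ptr<RTContext> acquire()
    {
        static std::weak_ptr<RTContext> cached;
        std::shared_ptr<RTContext> ctx = cached.lock();
        if(!ctx)
        {
            ctx.reset(new RTContext());
            cached = ctx;   // destroyed when the last viewport lets go
        }
        return ctx;
    }

  private:
    RTContext() { /* create renderer instance, converted-scene cache, ... */ }
};

// Usage idea: leftEye->setContext(RTContext::acquire());
//             rightEye->setContext(RTContext::acquire());  // same instance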

5.3.2.3 Scene Adapter

The scene adapter is responsible for mapping the OpenSG scene to a representation the back-end can use. This is usually where the bulk of the work has to be done when implementing a new mediator layer. In some cases, the renderer may be able to use the OpenSG scene directly, or at least parts of it (e.g. an OpenGL-based forward or deferred renderer), but often a conversion of the scene will be necessary. In this case, the adapter will usually traverse the whole OpenSG scene graph once in the first render call and build a shadow scene by converting objects such as geometries, materials, and lights into suitable representations. Note that this does not have to be (and usually is not) a one-to-one mapping; the case studies in Section 5.4 show examples.

However, the handling of incremental updates (described in the following subsection) requires quickly identifying the representatives that need updating as a result of a change. Therefore, several maps are usually built during the initial conversion, which serve as a scene dictionary.
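A minimal sketch of such a dictionary is given below. The key shown is a stand-in for whatever identifies an OpenSG object uniquely (for instance its field-container id), and the handle type is a stand-in for a back-end representative; both names are illustrative.

#include <map>

// Hedged sketch of the scene dictionary built during the initial
// conversion: it maps a changed OpenSG object back to the back-end
// representative that must be refreshed.
typedef unsigned int OsgContainerId;    // stand-in for an OpenSG object id
typedef unsigned int BackendMeshHandle; // stand-in for a back-end handle

class SceneDictionary
{
  public:
    void remember(OsgContainerId osgId, BackendMeshHandle handle)
    {
        _geometryMap[osgId] = handle;
    }

    // Called by the update handling (Section 5.3.3): find the
    // representative for a changed geometry, if one exists.
    bool lookup(OsgContainerId osgId, BackendMeshHandle &handle) const
    {
        std::map<OsgContainerId, BackendMeshHandle>::const_iterator it =
            _geometryMap.find(osgId);
        if(it == _geometryMap.end())
            return false;
        handle = it->second;
        return true;
    }

  private:
    // analogous maps exist for materials, lights, transforms, ...
    std::map<OsgContainerId, BackendMeshHandle> _geometryMap;
};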