
4.3.2 BlockMap Generation

The generation of a BlockMap texture is composed of several steps and is almost entirely performed by the graphics hardware. In particular, we use vertex, geometry and fragment shaders to acquire shape and surface attributes from the input dataset, and to build and assemble the various components of the BlockMap. The output of the rendering is redirected to texture memory (render–to–texture) by using OpenGL Framebuffer Objects (FBOs). Whenever a construction step outputs several values, we exploit the Multiple Render Targets (MRT) capability of GPUs to write to several textures simultaneously.
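As an illustration of this setup, the following sketch shows how a framebuffer object with multiple color attachments can be configured for render–to–texture with MRT. It is a minimal sketch in Python with PyOpenGL (the actual system uses the native OpenGL API); the function name, sizes and formats are illustrative, and a valid OpenGL context is assumed.

```python
# Minimal render-to-texture setup with Multiple Render Targets (MRT),
# sketched with PyOpenGL. A current OpenGL context is assumed; the
# function name, sizes and formats are illustrative, not the actual
# implementation described in the text.
from OpenGL.GL import *

def create_mrt_fbo(width, height, n_targets=3):
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    textures = []
    for i in range(n_targets):
        tex = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, None)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        # Attach the texture as the i-th color output of the FBO.
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, tex, 0)
        textures.append(tex)
    # Route fragment shader outputs 0..n_targets-1 to the attachments,
    # so that one rendering pass writes several textures at once.
    glDrawBuffers(n_targets,
                  [GL_COLOR_ATTACHMENT0 + i for i in range(n_targets)])
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
    return fbo, textures
```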

The following steps refer to the generation of a (2K + 256)×K BlockMap, where K is a user–defined power of two.

HeightMap and Roof/Ground Surface Attributes. The first step of the generation process consists of creating the heightmap of the dataset region enclosed in the bounding box associated with the BlockMap. This is easily done by rendering the input data (or the four child BlockMaps in the case of internal nodes) from above the region, with an orthogonal viewing volume coincident with the bounding box and a K × K viewport. In our coordinate frame the Z axis maps to the world vertical direction; thus, for each fragment generated by the rendering process, we map the world–space Z coordinate between the bounding box Zbottom and Ztop values to the range [0,1] and write the result to the texture representing the heightmap.

Moreover, by exploiting MRT, we also output the color–coded surface normal and the surface color. The target FBO for this step thus consists of three K × K textures: one 1–channel, 8–bit texture for the heightmap, and two 3–channel, 24–bit textures for the surface normal and color.
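The per–fragment height encoding thus amounts to a simple linear remapping followed by an 8–bit quantization; the following Python sketch (function name and types are illustrative) makes it explicit.

```python
import numpy as np

def encode_height(z_world, z_bottom, z_top):
    """Map world-space Z values to [0,1] relative to the BlockMap's
    bounding box and quantize them to the 8-bit heightmap range.
    z_world may be a scalar or an array of fragment heights."""
    t = np.clip((z_world - z_bottom) / (z_top - z_bottom), 0.0, 1.0)
    return np.round(t * 255.0).astype(np.uint8)
```

For example, with a bounding box spanning Z values from 0 to 40, a fragment at height 10 would be stored as encode_height(10.0, 0.0, 40.0) = 64.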

Walls Parametrization. Once the heightmap has been generated, it is read back to system memory to identify the discontinuities introduced by vertical surfaces and to construct the 8–bit WallsOffsetMap. The parametrization process consists of assigning a unique offset to each exposed texel (see Section 4.2.1). We do this by simply assigning to each exposed texel the value of a running counter starting from 1 (the special value 0 is used to identify roof or ground surfaces). We start with a single slice comprising the whole heightmap. Whenever the counter exceeds 255, we double the number of slices and repeat the parametrization independently on each slice, until all offsets fall in the representable range or the height in texels of a slice reaches a user–defined minimum (note that the minimum height for a slice is 2 texels, as one texel is dedicated to the column average normal). Finally, the resulting WallsOffsetMap is uploaded to texture memory to be used in the following steps.

This parametrization process could also be implemented to run on the GPU, for example by using parallel prefix sum (i.e. scan) and stream compaction techniques [53, 122]. However, the process is computationally simple, and the small memory footprint of the heightmap and the WallsOffsetMap allows us to run it on the CPU without introducing significant delays.
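For concreteness, the following Python sketch outlines the slicing scheme described above. The exposed–texel detection is assumed to have been done already; the exposed mask and the min_slice_height parameter are hypothetical names, and the code is an illustration, not the actual implementation.

```python
import numpy as np

def parametrize_walls(heightmap, exposed, min_slice_height=4):
    """Assign a unique 8-bit offset to each exposed texel.

    heightmap: (K, K) array; exposed: boolean (K, K) mask of texels
    lying on a vertical discontinuity. Returns the WallsOffsetMap and
    the number of horizontal slices used.
    """
    K = heightmap.shape[0]
    n_slices = 1
    while True:
        offsets = np.zeros((K, K), dtype=np.uint8)
        slice_h = K // n_slices
        ok = True
        for s in range(n_slices):
            rows = slice(s * slice_h, (s + 1) * slice_h)
            counter = 1  # 0 is reserved for roof/ground texels
            for (i, j) in zip(*np.nonzero(exposed[rows])):
                if counter > 255:   # offset no longer representable
                    ok = False
                    break
                offsets[s * slice_h + i, j] = counter
                counter += 1
            if not ok:
                break
        # Stop when all offsets fit, or the slices cannot shrink further.
        if ok or slice_h // 2 < min_slice_height:
            return offsets, n_slices
        n_slices *= 2  # too many wall texels in a slice: double the slices
```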

Wall Surfaces Attributes. To acquire the surface attributes belonging to vertical walls, we developed an elegant and effective sampling strategy. We take views of the whole geometry from uniformly distributed directions over the hemisphere centered at the BlockMap center, and reproject the result of each view onto the BlockMap’s prisms. Since each surface point can be seen from several directions, the final value is obtained by gathering all the contributions and combining them into a single value per point.

We developed a GPU–accelerated technique that harnesses the performance and programmability of current hardware to efficiently perform multi–view sampling.

The sampling process is performed by iterating the following steps for each view and each slice of the BlockMap:

1. render the surface attributes of the input data from the viewpoint with an orthogonal projection to an n×n texture (called AttributeBuffer in Figure 4.9);

2. ray cast the BlockMap with the same viewing settings, rendering to an n×n texture (called AddressBuffer in Figure 4.9); whenever a ray hits a vertical surface, the corresponding address in the WallsOffsetMap (i.e. a pointer to a column in the wall textures) is written, identifying the data that should be fetched for shading;

3. set a 256×K floating point texture as the rendering target and draw n×n point primitives, one for each pixel of the AttributeBuffer. In the vertex shader, the color corresponding to pixel (i, j) is fetched from the AttributeBuffer, while the vertex position is read from the AddressBuffer at the same coordinates.

Figure 4.9: Walls Attributes Sampling. In the first pass, surface attributes of vertical walls are drawn to an n×n AttributeBuffer texture. At the same time, the BlockMap heightmap is ray cast and the corresponding value of the WallsOffsetMap is written. In the second pass, n×n point primitives are drawn. In the vertex shader, each point Pij fetches the corresponding attribute Cij and address Aij. The point is displaced to the final address Aij in a strip of the WallsAttributeMap, where the value of Cij is accumulated. The final pass averages the accumulated attribute values.

The purpose of these steps is to build a BlockMap that, when ray cast with the same viewing parameters, produces the same result as rendering the original geometry. This approach is used to sample all the shading attributes of the BlockMap, i.e. color and normal. For the color, the rendering of the geometry at step 1 is done with texture mapping alone (i.e. without lighting), while the value written at step 3 is blended additively with any value already present from other samples that fell in the same texel (either in the same or in an earlier view). The same is done for the normals, which are encoded as colors for the sampling. In this manner, the colors and normals stored in each texel of the BlockMap slice are averaged over all the views from which the corresponding portion of surface is visible.
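Conceptually, the gathering step reduces to an additive accumulation followed by a per–texel normalization; a schematic numpy version (all names are illustrative) could read:

```python
import numpy as np

def accumulate_views(views, strip_shape):
    """Average wall attributes over all views seeing each texel.

    views: one (addresses, colors) pair per sampling view, where
    addresses is a list of (u, v) WallsAttributeMap coordinates and
    colors the matching list of RGB samples (missed rays already
    filtered out). All names are illustrative.
    """
    accum = np.zeros(strip_shape + (3,), dtype=np.float32)
    count = np.zeros(strip_shape, dtype=np.float32)
    for addresses, colors in views:
        for (u, v), c in zip(addresses, colors):
            accum[u, v] += c    # additive blending, as in step 3
            count[u, v] += 1.0  # one more view reached this texel
    seen = count > 0
    accum[seen] /= count[seen, None]  # final averaging pass
    return accum
```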

On modern GPUs it is possible to set an array of textures as the target color buffer and to select, on a per–primitive basis, the destination layer within the array. We exploit this capability to simultaneously render to the entire set of slices into which the wall surfaces have been partitioned. More specifically, in step 2 we also output the slice number associated with the hit column, and in step 3 we use it in a geometry shader to redirect each point primitive to the corresponding layer of the target texture array.
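In effect, the geometry shader performs a scatter of the point primitives into the layers of a texture array; the following numpy fragment (illustrative names, no GPU involved) sketches the equivalent operation.

```python
import numpy as np

def scatter_to_layers(points, layer_ids, n_layers, layer_shape):
    """Emulate the geometry-shader redirection: each point primitive
    carries the slice number of the wall column it sampled and is
    accumulated into that layer of the target texture array."""
    target = np.zeros((n_layers,) + layer_shape + (3,), dtype=np.float32)
    for (u, v, color), layer in zip(points, layer_ids):
        target[layer, u, v] += color  # write to the selected slice
    return target
```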

Ambient Occlusion. To compute accessibility, we also need the geometry outside the BlockMap, because it may occlude part of the geometry inside it. Here, we conservatively use the entire geometry as a potential occluder, although we verified that, in practical cases, using only the geometry closer than a fixed distance also leads to good visual results. Step 1 of the wall attributes sampling process is modified by rendering all the potentially occluding geometry and saving the depth buffer. Step 3 is modified by checking, for each point, whether the depth value of the corresponding fragment in the AttributeBuffer is greater than the depth value stored at step 1: if it is not, the contribution of the point is ignored (the sample is occluded); otherwise, the value written into the BlockMap is the value already present plus one. With this approach, the final value per texel corresponds to the number of views from which the corresponding portion of the surface is visible. The accessibility is then obtained by dividing that count by the total number of views.
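Assuming the common depth convention in which smaller values are closer to the viewpoint, the visibility counting can be sketched as follows (names are illustrative):

```python
import numpy as np

def accessibility(sample_depths, occluder_depths, n_views):
    """Fraction of views from which each texel is unoccluded.

    sample_depths / occluder_depths: one (n, n) depth buffer per view;
    the first comes from the samples being shaded, the second from
    rendering all potentially occluding geometry (modified step 1).
    Assumes that smaller depth means closer to the viewpoint.
    """
    visible = np.zeros_like(sample_depths[0], dtype=np.float32)
    for d_sample, d_occ in zip(sample_depths, occluder_depths):
        # A sample contributes when no other geometry lies in front of it.
        visible += (d_sample <= d_occ).astype(np.float32)
    return visible / float(n_views)
```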

To calculate the accessibility factors for roof and ground surfaces we adopt a similar procedure, but in this case we do not need to ray cast the heightmap or displace the point primitives in the vertex shader, because the depth buffer and the render target have the same size, and a one–to–one correspondence between them is established simply by superimposition.

The ambient occlusion term is computed only on the leaf nodes of the hierarchy. For internal nodes, ambient occlusion is treated as a local surface attribute exposed by the child BlockMaps, and is thus extracted with the same process used to retrieve color and normal information.

Assembly. The final step of the BlockMap generation process consists of assembling all the components into the final structure. Before packing, the slice textures holding the wall colors and the ambient occlusion term are read back to system memory for a final processing step. In particular, missing samples that may be present in the columns are filled by linear interpolation between the values at the hole boundaries, and the whole texture is then downsampled to match the final slice height. Similarly, the slices containing surface normal information are collapsed to a single row by averaging the samples present in each column.
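The per–column post–processing can be sketched as follows; this numpy fragment (illustrative names, downsampling omitted) fills holes by linear interpolation and computes the column average used when collapsing the normals.

```python
import numpy as np

def fill_and_collapse(column, valid):
    """Post-process one wall-texture column before packing.

    column: (H, 3) attribute samples; valid: boolean (H,) mask of the
    texels that received at least one sample. Missing samples are
    filled by linear interpolation between the hole boundaries; the
    returned average is the value used when collapsing normals.
    """
    idx = np.nonzero(valid)[0]
    assert idx.size > 0, "column has no valid samples"
    rows = np.arange(column.shape[0])
    filled = np.empty_like(column, dtype=np.float32)
    for c in range(column.shape[1]):
        # np.interp fills invalid rows from the surrounding valid ones.
        filled[:, c] = np.interp(rows, idx, column[idx, c])
    return filled, filled.mean(axis=0)
```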