
Shadow Retargeting

Academic year: 2022

Share "Shadow Retargeting"

Copied!
6
0
0

Laster.... (Se fulltekst nå)

Fulltekst

(1)

Shadow Retargeting

Additional Supplemental Material

Figure 1: Comparison of ground truth with our shadow retargeting scheme, showing the progression from point to area lighting retargeted from a rendered source image, with the character in the initial and bind poses, for a single light source.

Figure 2: Comparison of ground truth with our shadow retargeting scheme, showing the progression from point to area lighting retargeted from a rendered source image, with the character in the initial and bind poses, for two light sources.

Figure 3: Comparison of ground truth with our shadow retargeting scheme. Synthetic renderings in classic lighting environments are retargeted, showing the quality of the approximation with retargeted area shadows.

Figure 4: Comparison of ground truth with our shadow retargeting scheme. The shadow of a real, 3D-printed, non-deformed object is retargeted with single and double light sources and with hard and soft shadows.

Figure 5: Transition between keyframes A and B, using Shadow Retargeting to generate the otherwise laborious in-between frames.
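
Figure 5 suggests that, once keyframes A and B are authored, the in-between shadow frames that would otherwise be laborious to produce by hand are generated automatically. Purely as an illustrative sketch, and not the Shadow Retargeting algorithm itself, the snippet below shows the simplest conceivable in-betweening: a linear cross-blend between two keyframe shadow masks. The function name, the mask representation, and the blending scheme are all assumptions made for this example.

    import numpy as np

    def inbetween_shadow_masks(mask_a, mask_b, num_frames):
        """Illustrative only: linearly blend two keyframe shadow masks.

        mask_a, mask_b: float arrays in [0, 1] of the same shape, where 1 means
        fully shadowed. Returns num_frames interpolated masks, endpoints excluded.
        This is a generic cross-blend, not the retargeting method of the paper.
        """
        frames = []
        for i in range(1, num_frames + 1):
            t = i / (num_frames + 1)               # interpolation parameter in (0, 1)
            frames.append((1.0 - t) * mask_a + t * mask_b)
        return frames

    # Toy usage: shadow moves from the left half (keyframe A) to the right half (keyframe B).
    a = np.zeros((4, 4)); a[:, :2] = 1.0
    b = np.zeros((4, 4)); b[:, 2:] = 1.0
    mids = inbetween_shadow_masks(a, b, num_frames=3)
    print(mids[1])                                  # middle frame is a 50/50 blend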
