
In document Perception-inspired Tone Mapping (pages 80-86)

5.5 Applications

5.5.2 Local Image Processing

Image processing algorithms are usually applied with uniform parameter settings over the whole image. When the algorithm is localized, the parameters for the method are



Figure 5.8: Results of lightness perception tone mapping. The left column (apart from the last row) contains the source HDR image shown with the linear mapping. The middle column contains results of the presented tone mapping method. The right column depicts the decomposition into frameworks. Red, blue and green colors depict the distinct frameworks. Higher saturation of the color illustrates stronger anchoring within the framework and the intermediate colors depict the influence of more frameworks on an image area. The HDR images in the 2nd and 5th row from the top courtesy of SpheronVR, and the HDR image in the 4th row courtesy of Byong Mok Oh.

(a) global tone mapping (b) bilateral filtering (c) lightness perception model

Figure 5.9: Comparison of three tone mapping operators: global sigmoid function, local tone mapping with bilateral filter, and lightness perception tone mapping. The colors of the mapping functions in the plots correspond to the marked areas of frameworks (inset in image (c)). HDR image courtesy of SpheronVR.

set based on some constant local neighborhood. However, it is often desirable to vary the parameters of such algorithms between different areas of an image, which typically has to be done manually by the user. With the frameworks decomposition, it is now possible to identify the areas of an image that are perceived homogeneously. For automated image processing, this makes it possible to estimate the most appropriate parameters for a given algorithm individually for each framework.

In digital photography, it often happens that an image contains two different sources of illumination – for instance daylight from a cloudy sky and warm indoor tungsten light as in Figure 5.10. Such an image requires white balance correction. However, correcting for the daylight will result in an increased orange cast in the areas lit by tungsten light.

The decomposition into frameworks allows the identification of such separate areas and enables a different white balance correction in each of them. Again, frameworks represented as probability maps guarantee proper blending of edges where differently processed areas merge. One can envisage further possibilities in which our decomposition into frameworks reduces the required amount of manual interaction.
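The blending of per-framework corrections via probability maps can be sketched as follows. This is a hypothetical minimal implementation, not the original method: the function name, array layout, and the idea of representing each white balance correction as a per-channel gain vector are assumptions made for illustration.

```python
import numpy as np

def blend_framework_corrections(image, prob_maps, gains):
    """Apply a separate white-balance gain inside each framework and
    blend the results using the frameworks' probability maps.

    image:     H x W x 3 float array (linear RGB)
    prob_maps: list of H x W arrays; at each pixel the maps sum to 1
    gains:     one length-3 per-channel gain vector per framework
               (hypothetical representation of a white balance correction)
    """
    out = np.zeros_like(image, dtype=float)
    for p, g in zip(prob_maps, gains):
        # weight the corrected image by the framework's probability map;
        # soft maps ensure smooth transitions where frameworks meet
        out += p[..., None] * (image * np.asarray(g, dtype=float))
    return out
```

Because the probability maps vary smoothly across framework boundaries, the blended result transitions gradually between the two corrections instead of showing a hard seam.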

5.5.3 Performance

The estimation of lightness in a 4 Mpx image using our computational model takes less than a minute on a modern PC. The timing depends mainly on the number of decomposed frameworks, since the majority of the computation is spent in the decomposition stage. Once the frameworks are known, the estimation of anchors and the net lightness computation consist of simple operations. The K-means algorithm operates on a histogram and is therefore independent of the image resolution. The only bottleneck is the spatial processing using the bilateral filter, although we use the efficient approach presented by Durand and Dorsey [Durand and Dorsey 2002].
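The resolution independence of the K-means step can be illustrated with a small sketch: once a log-luminance histogram is built, the clustering only ever touches the histogram bins, regardless of how many pixels the image contains. The 1-D weighted k-means below is illustrative only, assuming a fixed bin count and naive initialization; it is not the exact implementation used in the model.

```python
import numpy as np

def kmeans_on_histogram(log_lum, n_clusters, n_bins=100, iters=50):
    """Weighted 1-D k-means over a log-luminance histogram.
    Cost per iteration depends on n_bins, not on image resolution."""
    hist, edges = np.histogram(np.ravel(log_lum), bins=n_bins)
    bin_centers = 0.5 * (edges[:-1] + edges[1:])
    # naive initialization: spread centroids evenly over the range
    centroids = np.linspace(edges[0], edges[-1], n_clusters)
    for _ in range(iters):
        # assign every bin to its nearest centroid
        labels = np.argmin(
            np.abs(bin_centers[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            w = hist[labels == k]
            if w.sum() > 0:
                # update centroid as the histogram-weighted mean of its bins
                centroids[k] = np.average(bin_centers[labels == k], weights=w)
    return centroids
```

Whether the image has 1 Mpx or 100 Mpx, the loop above runs over at most `n_bins` values, which matches the observation that only the bilateral filtering remains resolution-dependent.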


(a) global white balance (b) white balance within frameworks (c) frameworks

Figure 5.10: The daylight from a cloudy sky dominates the white balance in image (a) and causes the orange color cast in the interior illuminated by a tungsten light.

The frameworks (c) can be used to separate such areas of different illumination and to perform an independent white balance in each of them (b).

5.6 Conclusions

We have presented a computational model of the anchoring theory of lightness perception. The model provides a practical implementation of the key concepts of this theory and aims at an accurate estimation of lightness in real world scenes captured as HDR images. We leveraged the theory to handle complex images by developing an automatic method for image decomposition into frameworks. Through the estimation of local anchors we formalized the mapping of luminance values to lightness.

We examined the accuracy of our model by reproducing the results of two perceptual experiments that were originally conducted to validate the theory.

We have demonstrated a novel tone mapping operator which aims at the accurate reproduction of lightness perception of real world scenes on low dynamic range displays.

The strength of our operator is especially evident for difficult shots of real world scenes that involve distinct regions with significantly different luminance levels. Moreover, the decomposition of an image into frameworks offers additional potential for automated image processing fine-tuned to the perceptual aspects of the HVS.

Chapter 6

Objective Evaluation of Tone Mapping

Existing tone mapping algorithms can be generalized as a transfer function in the form of a “black box” which converts scene luminances to displayable pixel intensities. While the universal goal of such a transfer function is to reduce the original dynamic range while preserving the original appearance of an HDR image, its particular realization can vary and depends on the objectives of the target application. In many cases one may simply wish to obtain nice-looking images that resemble the original HDRs, but the requirements may also be more precise:

perceptual brightness match, good visibility of details, equivalent object detection performance in the tone mapped and the corresponding HDR image, etc. In view of the technical limitations and constrained observation conditions of standard displays, such requirements can only be met at the cost of other image properties. For instance, if the available dynamic range is assigned to enable good visibility of details (local contrasts), no dynamic range is left to depict global contrast variations in the scene. The trade-off between these conflicting goals is often balanced through an optimization process, but sometimes the design of an algorithm focuses on the requirements and is oblivious to the side effects. In the end, the overall impact of image processing operations on perceived image quality or fidelity to the real world appearance is not thoroughly understood.
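A minimal example of such a “black box” transfer function is a global sigmoid of the kind compared in Figure 5.9(a). The sketch below follows one common photographic-style formulation, scaling luminance by its log-average with a “key” parameter; the exact function used by any particular operator may differ, and the function name and default key value here are assumptions.

```python
import numpy as np

def sigmoid_tmo(lum, key=0.18):
    """Global sigmoid transfer function: maps scene luminance to the
    displayable range [0, 1) while compressing the dynamic range."""
    lum = np.asarray(lum, dtype=float)
    # normalize by the log-average (geometric mean) scene luminance
    log_avg = np.exp(np.mean(np.log(lum + 1e-8)))
    scaled = key * lum / log_avg
    # sigmoid: monotonic, asymptotically saturating toward 1
    return scaled / (1.0 + scaled)
```

The function is monotonic, so image structure is preserved globally, but the saturation at both ends is exactly the loss of local detail that the trade-off described above refers to.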

Recent psychophysical studies attempt to evaluate tone mapping operators in terms of subjects’ preference or the fidelity of the real world scene depiction [Drago et al. 2002, Kuang et al. 2004, Ledda et al. 2005, Yoshida et al. 2005]. In such studies each operator is treated as a “black box” and its performance is compared as a whole with respect to other operators, without an attempt at understanding the reasons for subjects’

judgments. While some studies of tone mapping operators go further and take into account the reproduction of overall brightness, global contrast, or details (local contrast) in dark and bright image regions [Ledda et al. 2005, Yoshida et al. 2005], they remain focused on comparing the operators’ performance for each of these tasks. These studies, however, provide no deeper analysis of how the pixels of an HDR image have been transformed by tone mapping, or of how the outcome of such a transformation depends on image content.


In this chapter, instead of subjective analysis, we focus on developing objective metrics which could help in understanding how particular image characteristics, such as contrast or brightness, are distorted by tone mapping, and in determining the impact of such distortions on perceived image quality. We identify that the major distortions in tone mapped images with respect to their HDR originals come from global and local contrast modulations. We create relevant metrics which evaluate the magnitude of Global Contrast Change and Detail Visibility Change along a perceptually meaningful scale and perform a corresponding study of eight tone mapping algorithms. Besides the evaluation, the output of these metrics can be used as feedback for perceptual enhancements [Smith et al. 2006].
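As a zeroth-order intuition for what a Global Contrast Change measure captures, one can compare the dispersion of log-luminances before and after tone mapping. The toy indicator below is purely illustrative and is not the perceptually scaled metric developed in this chapter; the function name and the choice of log-standard-deviation as the dispersion measure are assumptions.

```python
import numpy as np

def global_contrast_change(hdr_lum, ldr_lum, eps=1e-8):
    """Toy indicator of global contrast change: the ratio of
    log-luminance standard deviations of the tone mapped image and
    its HDR original. Values below 1 indicate global contrast
    compression. Illustrative only."""
    s_hdr = np.std(np.log(np.asarray(hdr_lum, dtype=float) + eps))
    s_ldr = np.std(np.log(np.asarray(ldr_lum, dtype=float) + eps))
    return s_ldr / s_hdr
```

For example, a pure gamma compression `L_out = L_in**0.5` halves every log-luminance difference, so this indicator reports 0.5, matching the intuition that global contrast was compressed by a factor of two.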

6.1 Related Work

A number of perception-based visible difference (fidelity) metrics for image pairs have been developed, mostly for image compression and color reproduction applications (refer to [Winkler 2005] for a recent survey of such metrics). State-of-the-art fidelity metrics such as the Visible Differences Predictor (VDP) [Daly 1993] or the Sarnoff Visual Discrimination Model (VDM) [Lubin 1995] include many important characteristics of the HVS, such as imperfections of the eye optics, luminance masking, the contrast sensitivity function (CSF), and pattern masking, making them very general metrics.

However, such complex metrics may perform worse than simpler metrics specialized for detecting well-defined distortion types, such as the blocking artifacts that arise in image compression [Winkler 2005]. The majority of existing fidelity metrics are based on HVS models developed through threshold psychophysical experiments, whose goal is to determine the magnitude at which a simple stimulus becomes just noticeable. Such metrics successfully detect the presence of perceivable image distortions, but perform poorly in estimating the magnitude of suprathreshold distortions and in predicting how distracting they are to a human observer [Chandler and Hemami 2003].

With its spatial features for estimating imperceptible texture details, the iCAM model [Fairchild and Johnson 2003] is an exception. However, since the magnitude of perceptual responses to local contrast is not available, it cannot be used to determine the change in detail visibility.

In this work, we are mostly concerned with one well defined suprathreshold distortion:

contrast compression due to tone mapping. While much work has been done on the subjective evaluation of different tone mapping operators [Ledda et al. 2005, Yoshida et al. 2005], to our knowledge we present the first feature-based characterization and objective perceptual measure of tone mapping distortions. Since fidelity metrics dealing with image pairs of significantly different dynamic ranges have not been proposed so far, and since we have found existing models ill-suited for our purposes, we present custom fidelity metrics for comparing perceived contrast differences between an original HDR image and its tone mapped LDR counterpart.
