
By properly choosing the filter weights, time information can be incorporated into the filter, e.g., by weighting the central pixel more heavily. The Kth nearest neighbor median filter and the two-dimensional in-place growing filter[5] are two examples of filters obtained by proper weighting.
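As an illustration of the weighting idea, the following is a minimal sketch of a weighted median with integer weights, where each sample is replicated according to its weight before the ordinary median is taken; the window and the center-heavy weights are illustrative assumptions, not the specific filters of [5]:

```python
import numpy as np

def weighted_median(window, weights):
    """Weighted median with integer weights: replicate each sample
    according to its weight, then take the ordinary median."""
    expanded = np.repeat(window, weights)
    return np.median(expanded)

# Center-weighted example: the central pixel counts three times.
window = np.array([3.0, 7.0, 4.0, 9.0, 5.0])
weights = np.array([1, 1, 3, 1, 1])
print(weighted_median(window, weights))  # 4.0
```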

Multistage  The standard median filter performs poorly on multi-dimensional signals with a high level of fine detail. For example, in images, thin lines and sharp edges are not preserved. Such details are, as mentioned above, very important to the human perceptual system, and consequently standard median filtering can cause severe visual degradation. Thus, several efforts have been made to take structural information into account. One such attempt gives us the multistage median filter, which is built from median sub-filters operating along different directions within the window.

Multistage median filters can preserve details in horizontal, vertical and diagonal directions, due to the corresponding sub-filters.
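A minimal sketch of one common multistage variant for a 3×3 window, assuming sub-medians taken along the horizontal, vertical and two diagonal lines through the center, followed by a median of medians (the exact cascade differs between formulations):

```python
import numpy as np

def multistage_median_3x3(img):
    """Multistage median filter with a 3x3 window.
    Borders are left unfiltered for simplicity."""
    out = img.astype(float).copy()
    med = np.median
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            x = img[r, c]
            z1 = med([img[r, c-1], x, img[r, c+1]])      # horizontal
            z2 = med([img[r-1, c], x, img[r+1, c]])      # vertical
            z3 = med([img[r-1, c-1], x, img[r+1, c+1]])  # diagonal
            z4 = med([img[r-1, c+1], x, img[r+1, c-1]])  # anti-diagonal
            out[r, c] = med([med([z1, z2, x]), med([z3, z4, x]), x])
    return out
```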

3.2 L-filters

An important generalization of the median filter is the L-filter[4]:

$$y = \sum_{i=1}^{N} a_i\, x_{(i)}, \qquad (3.4)$$

where $N$ is the size of the filter, $x_{(i)}$ are the ordered window samples, and $a_i$, $i = 1, \dots, N$, are weight coefficients.

By using the weight coefficients

$$a_i = \begin{cases} 1 & \text{if } i = (N+1)/2 \\ 0 & \text{otherwise,} \end{cases}$$

(for odd $N$) one obtains the standard median filter, and by using the weight coefficients $a_i = 1/N$, $i = 1, \dots, N$, one gets the standard running average filter. Setting all the coefficients to zero except for the $j$th, the $j$th rank order operation[4] is obtained. Obvious modifications lead to the max/min filters[43]. The $\alpha$-trimmed mean filter[41] is also obtainable by properly setting the filter coefficients.
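A minimal sketch of (3.4) and the special cases above, with window length $N = 5$ assumed for illustration:

```python
import numpy as np

def l_filter(window, a):
    """L-filter: weighted sum of the sorted window samples, eq. (3.4)."""
    return np.dot(a, np.sort(window))

N = 5
x = np.array([4.0, 12.0, 5.0, 3.0, 7.0])

a_median = np.zeros(N); a_median[(N - 1) // 2] = 1.0   # standard median
a_mean = np.full(N, 1.0 / N)                           # running average
a_trim = np.array([0, 1, 1, 1, 0]) / 3.0               # alpha-trimmed mean

print(l_filter(x, a_median), l_filter(x, a_mean), l_filter(x, a_trim))
```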

3.2.1 Adaptation to different noise distributions

The filter coefficients can be chosen to satisfy an optimality criterion that is related to the probability distribution of the input noise. Considering a constant signal corrupted by zero-mean white noise, we can model the observed samples as

$$x_i = s + n_i, \quad i = 1, \dots, N,$$

where $x_i$ is the observed sample, $s$ is the constant signal, and $n_i$ are independent identically distributed random variables satisfying $E[n_i] = 0$. Assuming that the noise distribution is symmetric, the condition that $E[y] = s$ is satisfied by imposing the constraint

$$\sum_{i=1}^{N} a_i = 1. \qquad (3.5)$$

The mean squared error of the estimate is then

$$E[(y - s)^2] = \mathbf{a}^T \mathbf{R}\, \mathbf{a}, \qquad (3.6)$$

where $\mathbf{a} = (a_1, \dots, a_N)^T$ and $\mathbf{R} = E[\tilde{\mathbf{n}}\tilde{\mathbf{n}}^T]$ is the correlation matrix of the ordered noise samples $\tilde{\mathbf{n}} = (n_{(1)}, \dots, n_{(N)})^T$. Minimizing (3.6) under the constraint (3.5) can be done using Lagrange multipliers. The Lagrangian function is given by

$$H(\mathbf{a}, \lambda) = \mathbf{a}^T \mathbf{R}\, \mathbf{a} + \lambda\, (\mathbf{e}^T \mathbf{a} - 1), \qquad (3.7)$$

where $\mathbf{e} = (1, \dots, 1)^T$.

Setting the derivatives with respect to $a_i$ equal to zero gives, assuming $\mathbf{R}$ is invertible, $\mathbf{a} = -\frac{\lambda}{2}\mathbf{R}^{-1}\mathbf{e}$. Plugging this back into the constraint (3.5) to solve for $\lambda$ yields

$$\mathbf{a} = \frac{\mathbf{R}^{-1}\mathbf{e}}{\mathbf{e}^T \mathbf{R}^{-1}\mathbf{e}}. \qquad (3.8)$$

Thus, having the noise correlation matrix $\mathbf{R}$, one can easily obtain the filter coefficients using (3.8). A more general design scheme for applications involving non-constant known signals is given in [4].
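Given $\mathbf{R}$, a minimal numpy sketch of (3.8):

```python
import numpy as np

def optimal_l_weights(R):
    """Optimal L-filter coefficients for a constant signal, eq. (3.8)."""
    e = np.ones(R.shape[0])
    w = np.linalg.solve(R, e)   # R^{-1} e without forming the inverse
    return w / (e @ w)          # normalize so the weights sum to one
```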

3.2.1.1 Computation of the correlation matrix

Evaluation of $\mathbf{R}$ in (3.8) requires expressions for the marginal and the bivariate densities of the order statistics $n_{(i)}$. Denoting the parent distribution and density of the noise as $\Phi$ and $\phi$, the marginal density of the $i$th order statistic is

$$f_{(i)}(x) = \frac{N!}{(i-1)!\,(N-i)!}\,\Phi(x)^{i-1}\bigl(1 - \Phi(x)\bigr)^{N-i}\phi(x),$$

and the joint density of $(n_{(i)}, n_{(j)})$, $i < j$, is

$$f_{(i)(j)}(x, y) = \frac{N!}{(i-1)!\,(j-i-1)!\,(N-j)!}\,\Phi(x)^{i-1}\bigl(\Phi(y) - \Phi(x)\bigr)^{j-i-1}\bigl(1 - \Phi(y)\bigr)^{N-j}\phi(x)\,\phi(y), \quad x \le y.$$

The elements of the symmetric correlation matrix are then obtained by integration:

$$R_{ii} = \int_{-\infty}^{\infty} x^2 f_{(i)}(x)\,dx, \qquad R_{ij} = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{y} x\,y\, f_{(i)(j)}(x, y)\,dx\,dy, \quad i < j.$$

The complexity of these expressions makes numerical integration generally necessary, even for simple parent distributions of the noise.

The resulting optimal coefficients for several noise distributions can be found in [4]. The results for the uniform and normal distributions are their corresponding maximum likelihood estimators, i.e., the midpoint, $(x_{(1)} + x_{(N)})/2$, for the uniform case, and the average, $\frac{1}{N}\sum_{i=1}^{N} x_i$, for the normal distribution case. Generally, the results confirm the statement in section 3.1.2 on page 19 in that the weights located at the center become more pronounced as the noise distribution grows heavier-tailed.
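As an alternative to the integrals above, $\mathbf{R}$ can also be estimated by Monte Carlo simulation; the sketch below does this and then applies (3.8). With uniform noise the weights should concentrate on $x_{(1)}$ and $x_{(N)}$ (the midpoint), and with Gaussian noise they should approach $1/N$ (the average), in line with the results quoted from [4]:

```python
import numpy as np

def estimate_R(sampler, N, trials=200_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the correlation matrix of the ordered
    noise samples n_(1), ..., n_(N)."""
    n = np.sort(sampler(rng, (trials, N)), axis=1)  # order statistics
    return n.T @ n / trials

N = 5
for name, sampler in [
    ("uniform", lambda rng, s: rng.uniform(-1, 1, s)),
    ("normal", lambda rng, s: rng.standard_normal(s)),
]:
    R = estimate_R(sampler, N)
    e = np.ones(N)
    w = np.linalg.solve(R, e)
    print(name, np.round(w / (e @ w), 3))
```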

3.2.1.2 Using empirical data

If the desired filter output at each sample is known, the minimization of

$$J = \sum_{k} (y_k - d_k)^2, \qquad (3.9)$$

where $y_k$ is the filter output and $d_k$ the desired output, is obtained by the filter coefficients [40]:

$$\mathbf{a} = E[\tilde{\mathbf{x}}\tilde{\mathbf{x}}^T]^{-1} E[d\,\tilde{\mathbf{x}}], \qquad (3.10)$$

where $\tilde{\mathbf{x}}$ denotes the vector of ordered window samples. By explicitly indexing the input samples, (3.10) can be written as

$$\mathbf{a} = \Bigl(\sum_k \tilde{\mathbf{x}}_k \tilde{\mathbf{x}}_k^T\Bigr)^{-1} \sum_k d_k\, \tilde{\mathbf{x}}_k.$$

In an on-line, or sample-by-sample, estimate the updates would be of the least mean squares (LMS) type:

$$\mathbf{a}(k+1) = \mathbf{a}(k) + \mu\,(d_k - y_k)\,\tilde{\mathbf{x}}_k,$$

where $\mu$ is a step parameter.
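A minimal sketch of the LMS-type update above; the step size, the constant training signal and the initialization are assumptions for illustration:

```python
import numpy as np

def lms_train_l_filter(x, d, N, mu=0.01):
    """Train L-filter coefficients sample by sample with an LMS-type
    update on the sorted window contents."""
    a = np.full(N, 1.0 / N)            # start from the running average
    for k in range(len(x) - N + 1):
        xs = np.sort(x[k:k + N])       # ordered window samples
        y = a @ xs                     # filter output
        a += mu * (d[k] - y) * xs      # LMS-type coefficient update
    return a

rng = np.random.default_rng(1)
x = 1.0 + rng.standard_normal(2000)    # constant signal in Gaussian noise
d = np.ones(2000)                      # desired output: the clean signal
print(np.round(lms_train_l_filter(x, d, N=5), 3))  # roughly 1/N each
```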

Perception-related cost functions  Palmieri and Croteau[40] in addition introduce a factor, $\gamma_k$, in (3.9), yielding a modified mean squared error function,

$$J = \sum_k \gamma_k\, (y_k - d_k)^2,$$

where $\gamma_k$ is a feature factor signaling the importance of a close fit at sample $k$. The feature extractor could be an edge detector, or generally an image-dependent parameter that reflects the relevance of that specific image area to good image perception. The resulting filter coefficients are:

$$\mathbf{a} = \Bigl(\sum_k \gamma_k\, \tilde{\mathbf{x}}_k \tilde{\mathbf{x}}_k^T\Bigr)^{-1} \sum_k \gamma_k\, d_k\, \tilde{\mathbf{x}}_k.$$
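A batch sketch of this weighted solution; the training windows and the feature factors are arbitrary illustrative values, not an edge detector:

```python
import numpy as np

def weighted_l_filter_fit(windows, d, gamma):
    """Weighted least-squares L-filter coefficients: each training
    window is sorted, and its squared error is weighted by gamma."""
    Xs = np.sort(windows, axis=1)              # ordered samples, row-wise
    R = Xs.T @ (gamma[:, None] * Xs)           # weighted correlation
    p = Xs.T @ (gamma * d)                     # weighted cross-correlation
    return np.linalg.solve(R, p)

rng = np.random.default_rng(2)
K, N = 1000, 5
windows = 1.0 + rng.standard_normal((K, N))
d = np.ones(K)
gamma = rng.uniform(0.5, 2.0, K)               # stand-in feature factors
print(np.round(weighted_l_filter_fit(windows, d, gamma), 3))
```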

3.3 Ll-filters

While the L-filter operates on the ordered input, $x_{(i)}$, losing spatial information, the linear FIR filter operates in the spatial domain, not utilizing order information. The Ll-filter[39] is a generalized L-filter combining information both before and after ranking.

The output of a linear filter can be written

$$y = \mathbf{b}^T \mathbf{x},$$

where $\mathbf{b}$ contains the filter weights and $\mathbf{x} = (x_1, \dots, x_N)^T$ contains the sample values. The corresponding L-filter can be written

$$y = \mathbf{a}^T \mathbf{P}\, \mathbf{x},$$

where $\mathbf{a}$ contains the filter weights, and $\mathbf{P}$ is an $N \times N$ permutation matrix sorting the elements of $\mathbf{x}$ in ascending order.

To account for both arrangements of the linear and L-filter, one would need $N^2$ coefficients. Namely, a data sample in the window is multiplied by a different coefficient according to its position both before and after ranking. A simplified version of the estimator that needs only $2N$ coefficients is what is called the Ll-filter and is given by [39]:

$$y = \mathbf{a}^T \mathbf{P}\, \mathrm{diag}(\mathbf{b})\, \mathbf{x},$$

where $\mathbf{a} = (a_1, \dots, a_N)^T$ and $\mathbf{b} = (b_1, \dots, b_N)^T$. In this form the $i$th ranked sample is weighted both by its rank coefficient $a_i$ and by the spatial coefficient of its original position in the window.
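A minimal sketch of the Ll-filter output under the bilinear form above:

```python
import numpy as np

def ll_filter_output(x_win, a, b):
    """Ll-filter output: spatial weights b are applied in the original
    sample order, rank weights a after sorting."""
    order = np.argsort(x_win)       # ranking permutation P
    return a @ (b * x_win)[order]   # a^T P diag(b) x
```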

The mean square error surface is generally a non-convex function. However, observing that the function becomes convex if either $\mathbf{a}$ or $\mathbf{b}$ is held fixed, one can reach a solution using bilinear parameterization: fix $\mathbf{b}$ and optimize $\mathbf{a}$, fix $\mathbf{a}$ to the new value and optimize $\mathbf{b}$, and so on. The on-line updates can be written [39]:

$$\mathbf{a}(k+1) = \mathbf{a}(k) + \mu\,(d_k - y_k)\, \mathbf{P}_k\, \mathrm{diag}(\mathbf{b}(k))\, \mathbf{x}_k,$$
$$\mathbf{b}(k+1) = \mathbf{b}(k) + \nu\,(d_k - y_k)\, \mathrm{diag}(\mathbf{x}_k)\, \mathbf{P}_k^T\, \mathbf{a}(k),$$

where $\mu$ and $\nu$ are step parameters. Bilinear parameterization procedures generally converge to local minima, but using several random starting parameters mitigates the problem [11].
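A minimal sketch of one on-line step, again assuming the bilinear form $y = \mathbf{a}^T \mathbf{P}\,\mathrm{diag}(\mathbf{b})\,\mathbf{x}$; the gradients with respect to $\mathbf{a}$ and $\mathbf{b}$ are written index-wise to avoid building $\mathbf{P}$ explicitly:

```python
import numpy as np

def ll_filter_step(x_win, a, b, d, mu=0.005, nu=0.005):
    """One Ll-filter output and one pair of LMS-type updates.
    x_win: window samples in spatial order; a: rank weights;
    b: spatial weights; d: desired output at this sample."""
    order = np.argsort(x_win)          # position of the i-th ranked sample
    px = (b * x_win)[order]            # P diag(b) x: weight spatially, then rank
    y = a @ px                         # bilinear filter output
    e = d - y
    grad_b = np.empty_like(b)          # diag(x) P^T a, written index-wise
    grad_b[order] = a * x_win[order]
    a = a + mu * e * px                # rank-weight update
    b = b + nu * e * grad_b            # spatial-weight update
    return y, a, b
```

A block alternative to these sample-by-sample updates is to hold one of $\mathbf{a}$, $\mathbf{b}$ fixed and solve the resulting convex least-squares problem for the other, alternating until convergence.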