Theorem 4. Suppose u ∈ C²(Ω), and let ρ, h, α > 0 be such that ρ, h → 0 and h = O(ρ). Let us consider the continuous function g̃ defined by

for t ≠ 0, where

Let f̃ be the continuous function defined by

Then, for x ∈ Ω,
According to Theorem 4, the Yaroslavsky neighborhood filter acts as an evolution PDE with two terms. The first term is proportional to the second derivative of u in the direction ξ, which is tangent to the level line passing through x. The second term is proportional to the second derivative of u in the direction η, which is orthogonal to the level line passing through x.
The weighting coefficient of the tangent diffusion, u_ξξ, is given by g̃(|Du|). The function g̃ is positive and decreasing, so there is always diffusion in that direction. The weight of the normal diffusion, u_ηη, is given by f̃(|Du|). Since the function f̃ takes both positive and negative values (see Figure 1), the filter behaves as a filtering/enhancing algorithm in the normal direction, depending on |Du|.
The intensity of the filtering in the tangent direction and of the enhancement in the normal direction tends to zero as the gradient tends to infinity. Thus, points with a very large gradient are not altered.
The neighborhood filter asymptotically behaves as the Perona–Malik equation,35 also creating shocks inside smooth regions (see Buades et al.7 for more details on this comparison).
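For concreteness, here is a minimal sketch of the Yaroslavsky neighborhood filter analyzed in Theorem 4, assuming its standard form: each pixel is replaced by an average of the pixels in a spatial ball of radius ρ, weighted by exp(−|u(y) − u(x)|²/h²). The function name `yaroslavsky_filter` and the default parameter values are illustrative, not taken from the article.

```python
import numpy as np

def yaroslavsky_filter(u, rho=3, h=10.0):
    """Sketch of the Yaroslavsky neighborhood filter (assumed standard form):
    average of the spatial neighbors within radius `rho`, weighted by a
    Gaussian of the gray-level difference with width `h`."""
    pad = rho
    padded = np.pad(u.astype(float), pad, mode="reflect")
    out = np.zeros_like(u, dtype=float)
    norm = np.zeros_like(u, dtype=float)
    for dy in range(-rho, rho + 1):
        for dx in range(-rho, rho + 1):
            if dx * dx + dy * dy > rho * rho:
                continue  # restrict the average to the spatial ball B_rho(x)
            shifted = padded[pad + dy: pad + dy + u.shape[0],
                             pad + dx: pad + dx + u.shape[1]]
            w = np.exp(-((shifted - u) ** 2) / (h ** 2))  # gray-level similarity weight
            out += w * shifted
            norm += w
    return out / norm
```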
Figure 1. Magnitude of the tangent diffusion (continuous line) and normal diffusion (dashed line) of Theorem 4.

3. NL-MEANS ALGORITHM
Given a discrete noisy image v = {v(i) | i ∈ I}, the estimated value NL[v](i), for a pixel i, is computed as a weighted average of all the pixels in the image:

NL[v](i) = ∑_{j∈I} w(i, j) v(j),
where the family of weights {w(i, j)}_j depends on the similarity between the pixels i and j and satisfies the usual conditions 0 ≤ w(i, j) ≤ 1 and ∑_j w(i, j) = 1.
The similarity between two pixels i and j depends on
the similarity of the intensity gray level vectors v(Ni) and
v(Nj), where Nk denotes a square neighborhood of fixed
size centered at a pixel k. This similarity is measured as a
decreasing function of the weighted Euclidean distance,
||v(Ni) − v(Nj)||²_{2,a}, where a > 0 is the standard deviation of the Gaussian kernel. The expectation of the Euclidean distance between the noisy neighborhoods is

E||v(Ni) − v(Nj)||²_{2,a} = ||u(Ni) − u(Nj)||²_{2,a} + 2σ²,

where u denotes the noise-free image and σ² the noise variance.
This equality shows the robustness of the algorithm since
in expectation the Euclidean distance preserves the order
of similarity between pixels.
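The following sketch turns this description into code, assuming the usual choice of a decreasing exponential weight w(i, j) ∝ exp(−||v(Ni) − v(Nj)||²_{2,a}/h²); the names `nl_means`, `f` (patch radius), `h`, and `a` are illustrative. It averages over all pixels exactly as in the definition above, so it is quadratic in the image size and only meant for small images.

```python
import numpy as np

def gaussian_kernel(f, a):
    """Gaussian kernel of standard deviation `a` on a (2f+1)x(2f+1) patch."""
    y, x = np.mgrid[-f:f + 1, -f:f + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2 * a ** 2))
    return k / k.sum()

def nl_means(v, f=3, h=10.0, a=1.0):
    """Sketch of NL-means: NL[v](i) = sum_j w(i, j) v(j), with weights driven by
    the Gaussian-weighted Euclidean distance between the patches around i and j."""
    H, W = v.shape
    padded = np.pad(v.astype(float), f, mode="reflect")
    kflat = gaussian_kernel(f, a).ravel()
    # Collect the (2f+1)^2 patch around every pixel.
    patches = np.stack([padded[dy:dy + H, dx:dx + W]
                        for dy in range(2 * f + 1)
                        for dx in range(2 * f + 1)], axis=-1)
    out = np.zeros_like(v, dtype=float)
    for i in np.ndindex(H, W):
        d2 = np.sum(kflat * (patches - patches[i]) ** 2, axis=-1)  # ||v(N_i)-v(N_j)||^2_{2,a}
        w = np.exp(-d2 / (h ** 2))   # decreasing function of the patch distance
        w /= w.sum()                 # normalize so that sum_j w(i, j) = 1
        out[i] = np.sum(w * v)       # weighted average over all pixels j
    return out
```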
Figure 2. q1 and q2 have a large weight in NL-means because their similarity windows are similar to that of p, while the weight w(p, q3) is much smaller.