Local Laplacian Filters:
Edge-Aware Image Processing
with a Laplacian Pyramid
By Sylvain Paris, Samuel W. Hasinoff, and Jan Kautz
DOI: 10.1145/2723694
Abstract
The Laplacian pyramid is ubiquitous for decomposing
images into multiple scales and is widely used for image
analysis. However, because it is constructed with spatially
invariant Gaussian kernels, the Laplacian pyramid is widely
believed to be ill-suited for representing edges, as well as for
edge-aware operations such as edge-preserving smoothing
and tone mapping. To tackle these tasks, a wealth of alternative techniques and representations has been proposed,
for example, anisotropic diffusion, neighborhood filtering,
and specialized wavelet bases. While these methods have
demonstrated successful results, they come at the price
of additional complexity, often accompanied by higher
computational cost or the need to postprocess the generated results. In this paper, we show state-of-the-art edge-aware processing using standard Laplacian pyramids. We
characterize edges with a simple threshold on pixel values
that allows us to differentiate large-scale edges from small-scale details. Building upon this result, we propose a set of
image filters to achieve edge-preserving smoothing, detail
enhancement, tone mapping, and inverse tone mapping.
The advantage of our approach is its simplicity and flexibility, relying only on simple point-wise nonlinearities and
small Gaussian convolutions; no optimization or postprocessing is required. As we demonstrate, our method produces consistently high-quality results, without degrading
edges or introducing halos.
1. INTRODUCTION
Laplacian pyramids have been used to analyze images at
multiple scales for a broad range of applications such as
compression, 6 texture synthesis, 18 and harmonization. 32 However, these pyramids are commonly regarded as a poor
choice for applications in which image edges play an
important role, for example, edge-preserving smoothing
or tone mapping. The isotropic, spatially invariant, smooth
Gaussian kernels on which the pyramids are built are considered almost antithetical to edge discontinuities, which
are precisely located and anisotropic by nature. Further, the
decimation of the levels, that is, the successive reduction of the
resolution by a factor of 2, is often criticized for introducing
aliasing artifacts, leading some researchers (e.g., Li et al. 21)
to recommend its omission. These arguments are often
cited as a motivation for more sophisticated schemes such
as anisotropic diffusion, 1, 29 neighborhood filters, 19, 34 edge-preserving optimization, 4, 11 and edge-aware wavelets. 12
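For concreteness, a Laplacian pyramid can be built and collapsed using nothing more than Gaussian blurring and factor-of-2 resampling. The following minimal sketch (in Python with NumPy/SciPy; the function names, blur width, and upsampling gain are illustrative choices rather than part of the original formulation, and a floating-point grayscale image is assumed) shows one such construction.

import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, sigma=1.0):
    # Blur with a small Gaussian, then keep every other row and column.
    return gaussian_filter(img, sigma)[::2, ::2]

def upsample(img, shape, sigma=1.0):
    # Zero-insert to the target shape, then blur to interpolate;
    # the factor 4 compensates for the inserted zeros.
    up = np.zeros(shape, dtype=img.dtype)
    up[::2, ::2] = img
    return 4.0 * gaussian_filter(up, sigma)

def build_laplacian_pyramid(img, levels):
    # Each level stores the detail removed by one blur-and-decimate step.
    pyr, current = [], img
    for _ in range(levels):
        smaller = downsample(current)
        pyr.append(current - upsample(smaller, current.shape))
        current = smaller
    pyr.append(current)  # low-frequency residual
    return pyr

def collapse_laplacian_pyramid(pyr):
    # Upsample the residual and add back the detail levels, coarse to fine.
    out = pyr[-1]
    for lap in reversed(pyr[:-1]):
        out = lap + upsample(out, lap.shape)
    return out

Because each level records exactly what the next blur-and-decimate step removes, collapsing the pyramid reproduces the input exactly, whatever blur width or upsampling gain is chosen.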
While Laplacian pyramids can be implemented using
simple image-resizing routines, other methods rely on more
sophisticated techniques. For instance, the bilateral filter
relies on a spatially varying kernel, 34 optimization-based
methods (e.g., Fattal et al., 13 Farbman et al., 11 Subr et al., 31 and
Bhat et al. 4) minimize a spatially inhomogeneous energy,
and other approaches build dedicated basis functions for
each new image (e.g., Szeliski, 33 Fattal, 12 and Fattal et al. 15).
This additional level of sophistication is also often associated with practical shortcomings. The parameters of anisotropic diffusion are difficult to set because of the iterative
nature of the process, neighborhood filters tend to over-sharpen edges, 5 and methods based on optimization do
not scale well due to the algorithmic complexity of the solvers. While some of these shortcomings can be alleviated in
postprocessing, for example, bilateral filtered edges can be
smoothed, 3, 10, 19 this induces additional computation and
parameter setting, and a method producing good results
directly is preferable. In this paper, we demonstrate that
state-of-the-art edge-aware filters can be achieved with
standard Laplacian pyramids. We formulate our approach
as the construction of the Laplacian pyramid of the filtered
output. For each output pyramid coefficient, we render a
filtered version of the full-resolution image, processed to
have the desired properties according to the corresponding
local image value at the same scale, build a new Laplacian
pyramid from the filtered image, and then copy the corresponding coefficient to the output pyramid. The advantage of this approach is that while it may be nontrivial to
produce an image with the desired property everywhere, it
is often easier to obtain the property locally. For instance,
global detail enhancement typically requires a nonlinear
image decomposition (e.g., Fattal et al., 14 Farbman et al., 11
and Subr et al. 31), but enhancing details in the vicinity of
a pixel can be done with a simple S-shaped contrast curve
centered on the pixel intensity. This local transformation
only achieves the desired effect in the neighborhood of a
pixel, but is sufficient to estimate the fine-scale Laplacian
coefficient of the output. We repeat this process for each
coefficient independently and collapse the pyramid to
produce the final output.
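The sketch below follows this per-coefficient description literally for the detail-enhancement case, reusing the pyramid helpers from the earlier sketch. The pointwise remapping curve, its parameters sigma_r and alpha, and the fixed number of levels are illustrative assumptions, not the authors' reference implementation.

def build_gaussian_pyramid(img, levels):
    # Successively blurred-and-decimated copies; Gaussian level l has the
    # same resolution as Laplacian level l.
    pyr = [img]
    for _ in range(levels):
        pyr.append(downsample(pyr[-1]))
    return pyr

def remap(img, g0, sigma_r=0.3, alpha=0.5):
    # Pointwise S-shaped curve centered on g0: differences smaller than
    # sigma_r are expanded (alpha < 1), larger ones are left untouched.
    d = img - g0
    boosted = g0 + np.sign(d) * sigma_r * (np.abs(d) / sigma_r) ** alpha
    return np.where(np.abs(d) < sigma_r, boosted, img)

def local_laplacian_filter(img, levels=4):
    gauss = build_gaussian_pyramid(img, levels)
    out = build_laplacian_pyramid(img, levels)   # shapes reused; detail levels overwritten
    for l in range(levels):                      # the coarse residual is kept as-is
        for y in range(out[l].shape[0]):
            for x in range(out[l].shape[1]):
                g0 = gauss[l][y, x]              # local image value at this scale
                tmp = build_laplacian_pyramid(remap(img, g0), levels)
                out[l][y, x] = tmp[l][y, x]      # copy the single matching coefficient
    return collapse_laplacian_pyramid(out)

Written this way, the loop rebuilds a full pyramid for every coefficient and is far too slow for practical use; it is only meant to make the data flow explicit. A practical implementation would restrict each remapping and pyramid construction to the image region that actually influences the coefficient being computed.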
The original version of this paper was published in ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2011) 30, 4 (Aug. 2011), 68:1–68:12.