5.4. Discussion
While our method can fail in the presence of excessive noise or when extreme parameter settings are used (e.g., the Lenna picture in the supplemental material has a high level of noise), we found that our filters are very robust and behave well over a broad range of settings. Figure 15 shows a variety of parameter values applied to the same image; the results are consistently satisfying, high-quality, and halo-free, and many more such examples are provided in the supplemental material. While the goal of edge-aware processing can be ill-defined, our results show that our approach realizes many edge-aware effects with intuitive parameters and a simple implementation.
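To make the role of the parameters concrete, the sketch below shows the kind of pointwise remapping commonly used in local Laplacian filtering: α shapes small (detail) variations, β sets the slope applied to large (edge) variations, and σr separates the two regimes. This is an illustrative sketch with these three assumed parameters, not the authors' reference code:

```python
import numpy as np

def remap(i, g, sigma_r=0.4, alpha=0.5, beta=0.0):
    """Pointwise remapping sketch.

    For |i - g| <= sigma_r (detail): alpha < 1 boosts, alpha > 1 smooths.
    For |i - g| >  sigma_r (edge):   beta < 1 compresses, beta > 1 expands.
    `i` is a pixel value, `g` a local reference (e.g., a Gaussian coefficient).
    """
    d = i - g
    detail = g + np.sign(d) * sigma_r * (np.abs(d) / sigma_r) ** alpha
    edge = g + np.sign(d) * (beta * (np.abs(d) - sigma_r) + sigma_r)
    return np.where(np.abs(d) <= sigma_r, detail, edge)
```

For example, with alpha = 0.5 a small variation of 0.1 around g is pushed outward (detail enhancement), while with beta = 0 any variation larger than sigma_r is clamped to sigma_r (strong range compression).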
The current shortcoming of our approach is its running time. Thanks to the multiscale nature of our algorithm, we can mitigate this issue by generating quick previews that are faithful to the full-resolution results (Figure 16). Furthermore, the algorithm is highly parallelizable and should lend itself to a fast GPU implementation. Beyond these practical aspects, our main contribution is a better characterization of the multiscale properties of images.
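One simple way such previews can be realized — a hypothetical sketch with made-up helpers (`downsample`, `preview`), not the authors' implementation — is to run the expensive filter on a downsampled copy, so per-pixel work drops roughly quadratically with the downsampling factor:

```python
import numpy as np

def downsample(img, k):
    """Average k x k blocks (box filter) as a simple stand-in for proper
    pyramid downsampling. Assumes both dimensions are divisible by k."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def preview(img, filt, k=4):
    """Quick preview: apply a per-pixel filter `filt` to an image k times
    smaller in each dimension, cutting its work by roughly a factor k**2."""
    return filt(downsample(img, k))
```

A production preview would downsample with a proper antialiasing filter, but the cost argument is the same.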
(a) Edge-aware wavelets (b) Close-up
(c) Our result (d) Close-up
Figure 10. The extreme contrast near the light bulb is particularly challenging. Images (a) and (b) are reproduced from Fattal.12 The edge-aware wavelets suffer from aliasing and generate an irregular edge (b). In comparison, our approach (d) produces a clean edge. We set our method to approximately achieve the same level of detail (σr = log(3.5), α = 0.5, β = 0).
results with a low global contrast (β = 0) and high local details (α = 0.25). In general, the results produced by our method did not exhibit any particular problems (Figure 12). We compare exaggerated renditions of our method with Farbman et al.11 and Li et al.21 Our method produces consistent results without halos, whereas the other methods either create halos or fail to exaggerate detail (Figure 13).
One typical difficulty we encountered is that the sky sometimes interacts with other elements to form high-frequency textures that are undesirably amplified by our detail-enhancing filter (Figures 8b and 14). Such “misinterpretation” is common to all low-level filters that lack a semantic understanding of the scene, and typically requires user feedback to correct.22
We also experimented with inverse tone mapping, using slope values β larger than 1 to increase the dynamic range of a normal image. Since we operate on log intensities, roughly speaking, the linear dynamic range gets exponentiated by β. Applying our tone-mapping operator to these range-expanded results gives images close to the originals, typically with a PSNR between 25 and 30 dB for β = 2.5. This shows that our inverse tone mapping preserves the image content well. While a full-scale study on an HDR monitor is beyond the scope of this paper, we believe that our simple approach can complement other relevant techniques (e.g., Masia et al.25). Sample HDR results are provided in the supplemental material.
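The log-domain round trip described above can be sanity-checked numerically. The sketch below is an illustration under simplifying assumptions — a pure slope expansion in the log domain and 8-bit quantization standing in for the full tone-mapping operator — and is not the paper's pipeline:

```python
import numpy as np

def expand_log(img, beta):
    """Scale log intensity by beta: the linear dynamic range gets
    exponentiated by beta (inverse tone mapping for beta > 1)."""
    return np.exp(beta * np.log(np.clip(img, 1e-6, None)))

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB, assuming values in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.uniform(0.05, 1.0, size=(64, 64))  # stand-in "normal" image
beta = 2.5

hdr = expand_log(img, beta)                   # range-expanded result
# Quantize the expanded image to 8 bits (losses accumulate here),
# then map back with slope 1/beta to close the round trip.
q = np.round(hdr / hdr.max() * 255) / 255 * hdr.max()
back = expand_log(np.clip(q, 1e-6, None), 1.0 / beta)

print(f"round-trip PSNR: {psnr(img, back):.1f} dB")
```

With only quantization loss, the round-trip PSNR stays high; the paper's 25–30 dB figures reflect the losses of the full tone-mapping operator rather than this idealized slope inversion.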
(a) Uncorrected bilat. filter (b) Close-up
(c) Our result (d) Close-up
Figure 11. The bilateral filter sometimes oversharpens edges, which can lead to artifacts (b). We used code provided by Paris and Durand26 and multiplied the detail layer by 2.5 to generate these results. Although such artifacts can be fixed in postprocessing, this introduces more complexity to the system and requires new parameters. Our approach produces clean edges directly (d). We set our method to achieve approximately the same visual result (σr = log(2.5), α = 0.5, β = 0).