Eulerian Video Magnification and Analysis
By Neal Wadhwa, Hao-Yu Wu, Abe Davis, Michael Rubinstein, Eugene Shih, Gautham J. Mysore,
Justin G. Chen, Oral Buyukozturk, John V. Guttag, William T. Freeman, and Frédo Durand
The world is filled with important but visually subtle signals. A person’s pulse, the breathing of an infant, the sag and sway of a bridge: these all create visual patterns too subtle to see with the naked eye. We present Eulerian Video Magnification, a computational technique for visualizing subtle color and motion variations in ordinary videos by making the variations larger. It is a microscope for small changes that are hard or impossible to see on our own. In addition, these small changes can be quantitatively analyzed and used to recover sound from the vibrations of distant objects, characterize material properties, and remotely measure a person’s pulse.
A traditional microscope takes a slide with details too small
to see and optically magnifies it to reveal a rich world of
bacteria, cells, crystals, and materials. We believe there is
another invisible world to be visualized: that of tiny motions
and small color changes. Blood flowing through one’s face
makes it imperceptibly redder (Figure 1a), the wind can
cause structures such as cranes to sway a small amount
(Figure 1b), and the subtle pattern of a baby’s breathing can
be too small to see. The world is full of such tiny, yet meaningful, temporal variations. We have developed tools to visualize these temporal variations in position or color, resulting
in what we call a motion, or color, microscope. These new
microscopes rely on computation, rather than optics, to
amplify minuscule motions and color variations in ordinary and high-speed videos. The visualization of these tiny
changes has led to applications in biology, structural analysis, and mechanical engineering, and may lead to applications in health care and other fields.
We process videos that may look static to the viewer
and output modified videos in which motion or color changes
have been magnified to become visible. In the input videos,
objects may move by only 1/100th of a pixel, while in the
magnified versions, motions can be amplified to span many
pixels. We can also quantitatively analyze these subtle signals
to enable other applications, such as extracting a person’s
heart rate from video, or reconstructing sound from a distance by measuring the vibrations of an object in a high-speed
video (Figure 1c).
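To make the quantitative side concrete, a toy version of pulse extraction can be written in a few lines: treat the mean intensity of a skin region over time as a 1-D signal and pick the dominant frequency in a plausible pulse band. This is our own simplified sketch, not the authors' pipeline; the function name, the band limits, and the synthetic trace are all illustrative assumptions.

```python
import numpy as np

def estimate_pulse_bpm(trace, fps, lo=0.7, hi=3.0):
    """Toy sketch (not the authors' pipeline): return the dominant frequency
    of an intensity trace within a plausible human pulse band, in beats per
    minute. `lo` and `hi` (Hz) are assumed band limits."""
    trace = np.asarray(trace, dtype=float)
    trace = trace - trace.mean()                  # drop the DC component
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 10 s trace at 30 fps: a weak 1.2 Hz (72 bpm) pulse plus noise.
fps = 30
t = np.arange(300) / fps
rng = np.random.default_rng(0)
trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.002 * rng.standard_normal(300)
```

Even though the pulse signal here is far weaker than typical pixel noise per frame, averaging over a region and looking at the spectrum makes the periodicity easy to find.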
The algorithms that make this work possible are simple, efficient, and robust. By processing local color or phase changes, we can isolate and amplify signals of interest. This contrasts with earlier work on amplifying small motions¹³ by computing per-pixel motion vectors and then displacing pixel values by the magnified motion vectors.

This Research Highlight is a high-level overview of three papers about tiny changes in videos: Eulerian Video Magnification for Revealing Subtle Changes in the World, Phase-Based Video Motion Processing,²² and The Visual Microphone.
To compare our new work to the previous motion-vector work, we borrow terminology from fluid mechanics. In a Lagrangian perspective, the motion of fluid particles is tracked over time from the reference frame of the particles themselves, similar to observing a river flow from the moving perspective of a boat. This is the approach taken by the earlier work, tracking points in the scene and advecting pixel colors across the frame. In contrast, an Eulerian perspective uses a fixed reference frame and characterizes fluid properties over time at each fixed location, akin to an observer watching the water from a bridge. The new techniques we describe follow this approach by looking at temporal signals at fixed image locations.
The most basic version of our processing looks at intensity variations over time at each pixel and amplifies them.
This simple processing reveals both subtle color variations and small motions because, for small sub-pixel
motions or large structures, motion is linearly related
to intensity change through a first-order Taylor series
expansion (Section 2). This approach to motion magnification breaks down when the amplification factor is
large and the Taylor approximation is no longer accurate. Thus, for most motion magnification applications,
we develop a different approach, transforming the image
into a complex steerable pyramid, in which position is
explicitly represented by the phase of spatially localized
sinusoids. We exaggerate the phase variations observed
over time, modifying the coefficients of the pyramid
representation. Then, the pyramid representation is collapsed to produce the frames of a new video sequence that
shows amplified versions of the small motions (Section 3).
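To give a feel for phase-based magnification, the sketch below amplifies phase differences globally using the Fourier shift theorem, standing in for the complex steerable pyramid's spatially localized filters. This is a deliberate simplification of ours, not the papers' implementation, which manipulates phase independently per scale, orientation, and location.

```python
import numpy as np

def phase_magnify(ref, frame, alpha):
    """Amplify the motion of `frame` relative to `ref` by scaling phase
    differences in the Fourier domain -- a global stand-in (our own
    simplification) for the spatially localized complex steerable pyramid."""
    F_ref, F = np.fft.fft(ref), np.fft.fft(frame)
    dphi = np.angle(F) - np.angle(F_ref)          # per-frequency phase change
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    # Adding alpha * dphi turns a shift of d into a shift of (1 + alpha) * d.
    return np.real(np.fft.ifft(F * np.exp(1j * alpha * dphi)))

# A Gaussian bump translated by 0.3 samples -- well below one pixel.
n = np.arange(128)
bump = lambda c: np.exp(-0.5 * ((n - c) / 4.0) ** 2)
ref, frame = bump(64.0), bump(64.3)
out = phase_magnify(ref, frame, alpha=4)          # shift becomes ~5 * 0.3

centroid = lambda s: np.sum(n * s) / np.sum(s)    # sub-pixel position estimate
```

Because phase encodes position directly, this manipulation translates structures without the intensity clipping that limits the purely linear approach.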
Both Eulerian approaches lead to faster processing and
fewer artifacts than the previous Lagrangian approach.
However, the Eulerian approaches only work well for small
motions, not arbitrary ones.
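The basic intensity-based processing can likewise be sketched in a few lines. The helper below is hypothetical: it amplifies each pixel's deviation from its temporal mean, standing in for the temporal bandpass filter and spatial pyramid used in practice, and relies on the first-order relation f(x + δ) ≈ f(x) + δf′(x) that holds for small sub-pixel motions.

```python
import numpy as np

def linear_magnify(frames, alpha):
    """Hypothetical helper: amplify each pixel's deviation from its temporal
    mean by (1 + alpha). Real pipelines use a temporal bandpass filter and a
    spatial pyramid instead of a plain mean, but the principle is the same."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0, keepdims=True)     # roughly the static scene
    return mean + (1 + alpha) * (frames - mean)

# A soft edge translating by 1/10 of a sample per frame (sub-pixel motion).
x = np.linspace(0.0, 1.0, 201)                    # sample spacing 0.005
edge = lambda shift: 1.0 / (1.0 + np.exp(-(x - 0.5 - shift) / 0.02))
video = np.stack([edge(s) for s in (0.0, 0.0005, 0.001)])

magnified = linear_magnify(video, alpha=20)

# Locate each edge by where its intensity profile crosses 0.5.
edge_pos = lambda profile: np.interp(0.5, profile, x)
shift_in = edge_pos(video[2]) - edge_pos(video[0])
shift_out = edge_pos(magnified[2]) - edge_pos(magnified[0])
```

On this toy example the apparent motion grows by roughly the factor (1 + alpha); with much larger factors, or motions comparable to the edge width, the Taylor approximation fails and artifacts appear, which is exactly why the phase-based variant exists.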
Making small color changes and motions visible adds
a dimension to the analysis that goes beyond simply