views. These views are one-dimensional because of the second challenge we face: Suitable cameras to record 2D image sequences at this time resolution do not exist, owing to sensor bandwidth limitations. To solve this, we introduce a novel hardware implementation that sweeps the exposures across a vertical field of view to build 3D space–time data volumes.
Third, comprehensible visualization of the captured time-resolved data is non-trivial: It is a novel type of data that we
are not accustomed to seeing, and some observed effects
can be counter-intuitive. We therefore create techniques
for comprehensible visualization of this time-resolved
data, including movies showing the dynamics of real-world
light transport phenomena and the notion of peak-time,
which partially overcomes the low-frequency appearance
of integrated global light transport. Finally, direct measurements of events at this speed appear warped in space–time,
because the finite speed of light implies that the recorded
light propagation delay depends on camera position relative
to the scene. To correct for this, and visualize events in their
correct sequence, we introduce a time-unwarping technique,
which accounts for the distortions in captured time-resolved
information due to the finite speed of light.
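To make the notions of peak time and time unwarping concrete, the sketch below shows how both could be computed from a captured data volume. It is a minimal illustration in Python, assuming a (T, H, W) time-resolved volume and a calibrated per-pixel scene-to-camera distance map; the function and variable names are ours, for illustration, not part of the actual pipeline.

```python
import numpy as np

C_MM_PER_PS = 0.3  # speed of light, approximately 0.3 mm per picosecond

def peak_time_map(volume, t_axis_ps):
    """Per-pixel peak time: the instant at which each pixel's transient
    intensity profile reaches its maximum.
    volume: (T, H, W) time-resolved data; t_axis_ps: (T,) frame timestamps."""
    return t_axis_ps[np.argmax(volume, axis=0)]  # (H, W) map, in ps

def time_unwarp(t_camera_ps, cam_distance_mm):
    """Convert recorded (camera) time to world time by removing the
    per-pixel scene-to-camera propagation delay d / c.
    cam_distance_mm: (H, W) calibrated distance from each scene point
    to the camera, in millimeters."""
    return t_camera_ps - cam_distance_mm / C_MM_PER_PS

# Hypothetical usage, assuming a volume of shape (512, 672, 1000) and
# a per-frame spacing of 1.85 ps:
# t = np.arange(volume.shape[0]) * 1.85
# world_time = time_unwarp(peak_time_map(volume, t), distance_map_mm)
```

Under these assumptions, a scene point 300 mm farther from the camera is recorded roughly 1000 ps late; subtracting d/c restores the order in which events actually unfold in the scene.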
In the following, we describe these contributions in detail.
We explain our complete hardware, calibration, and data
processing and visualization pipeline, and demonstrate its
potential by acquiring time-resolved movies of significant
light transport effects, including scattering, diffraction, and
multiple diffuse interreflections. We further discuss possible
applications of this new imaging modality, and the relevance
of this work, not only in imaging, but also in areas such as biomedical research or astronomy.
2. RELATED WORK
2.1. Ultrafast devices
Repetitive illumination techniques used in incoherent LiDAR use cameras with typical exposure times on the order of hundreds of picoseconds, two orders of magnitude slower than our system.2 The fastest 2D continuous, real-time monochromatic camera operates at hundreds of nanoseconds per frame, with a spatial resolution of 200 × 200 pixels; this is less than one-third of what we achieve in each dimension.3 Avalanche photodetector (APD) arrays can reach temporal resolutions of several tens of picoseconds if they are used in a photon-starved regime where only a single photon hits a detector within a time window of tens of nanoseconds.1 Liquid nonlinear shutters actuated with powerful
laser pulses have been used to capture single analog frames
imaging light pulses at picosecond time resolution. Other
sensors that use a coherent phase relation between the illumination and the detected light, such as optical coherence
tomography (OCT), coherent LiDAR, light in flight holography, or white light interferometry, achieve femtosecond resolutions; however, they require light to maintain coherence
(i.e., wave interference effects) during light transport, and
are therefore unsuitable for indirect illumination, in which
diffuse reflections remove coherence from the light. Last,
simple streak sensors capture incoherent light at picosecond to nanosecond speeds, but are limited to a line or a low-resolution (20 × 20) square field of view.
In contrast, our system is capable of recording and reconstructing space–time world information of incoherent light
propagation in free-space, table-top scenes, at a resolution
of up to 672 × 1000 pixels and under 2 ps per frame. The varied range and complexity of the scenes we can capture allow
us to visualize the dynamics of global illumination effects,
such as scattering, specular reflections, inter-reflections,
subsurface scattering, caustics, and diffraction.
2.2. Time-resolved imaging
Recent advances in time-resolved imaging have been exploited to recover geometry and motion around corners,10, 14, 16, 17 as well as albedo from a single viewpoint. However, they all share some fundamental limitations (such as capturing only third-bounce light) that make them unsuitable for capturing videos of light in motion. The principles we develop in this paper were first demonstrated by the authors in two previous publications18, 19; this has given rise to alternative, inexpensive PMD-based approaches (e.g., Ref.), although the achieved temporal resolution is on the order of nanoseconds (instead of picoseconds). Wu et al.21 present a rigorous analysis of transient light transport in the
frequency domain, and show how it can be applied to build
a bare-sensor ultrafast imaging system. Last, two recent
publications provide valuable tools for time-resolved imaging: Wu and colleagues20 separate direct and global illumination components from time-resolved data captured with
the system we describe in this paper, by analyzing the time
profile of each pixel, and demonstrate a number of applications; whereas Jarabo et al.7 present a framework for the
efficient simulation of light-in-flight movies, which enables
analysis-by-synthesis approaches for the analysis of transient light transport.
3. CAPTURING SPACE–TIME PLANES
We capture time scales orders of magnitude faster than the
exposure times of conventional cameras, in which photons
reaching the sensor at different times are integrated into a
single value, making it impossible to observe ultrafast optical phenomena. The system described in this paper has an
effective exposure time down to 1.85 ps; since light travels at 0.3 mm/ps, it advances approximately 0.5 mm between frames in our reconstructed movies.
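As a quick sanity check on these numbers, the following one-line calculation in Python reproduces the distance light covers during one effective exposure; both constants come from the text above.

```python
C_MM_PER_PS = 0.3   # speed of light, ~0.3 mm per picosecond
EXPOSURE_PS = 1.85  # effective exposure time per frame

print(f"light advances ~{C_MM_PER_PS * EXPOSURE_PS:.3f} mm per frame")  # ~0.555 mm
```

This is consistent with the roughly 0.5 mm per frame quoted above.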
3.1. System
An ultrafast setup must overcome several difficulties in order
to accurately measure a high-resolution (both in space and
time) image. First, for an unamplified laser pulse, a single
exposure time of less than 2 ps would not collect enough
light, so the SNR would be unworkably low. As an example, for a table-top scene illuminated by a 100 W bulb, only
about 1 photon on average would reach the sensor during
a 2 ps open-shutter period.
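The following back-of-envelope sketch illustrates how such a photon-count estimate can be made. Every number in it is an assumption chosen for illustration (bulb efficiency, patch size, aperture, distances, albedo), not a measured value from our setup; the point is only that the expected count over a 2 ps window is of order one photon.

```python
import math

# Illustrative assumptions only -- not measured values.
BULB_POWER_W  = 100.0    # electrical power of the bulb
VISIBLE_FRAC  = 0.05     # assume ~5% of the power emitted as visible light
PHOTON_E_J    = 3.6e-19  # energy of a ~550 nm photon
PATCH_AREA_M2 = 0.01     # 10 cm x 10 cm diffuse scene patch
ALBEDO        = 0.5      # assumed reflectance of the patch
APERTURE_M2   = 1e-4     # ~1 cm^2 camera aperture
DIST_M        = 1.0      # bulb-to-patch and patch-to-camera distance
EXPOSURE_S    = 2e-12    # 2 ps open-shutter period

emitted_per_s = BULB_POWER_W * VISIBLE_FRAC / PHOTON_E_J
on_patch_per_s = emitted_per_s * PATCH_AREA_M2 / (4 * math.pi * DIST_M**2)
# crude uniform-hemisphere approximation for the diffuse bounce
at_sensor_per_s = on_patch_per_s * ALBEDO * APERTURE_M2 / (2 * math.pi * DIST_M**2)
print(f"~{at_sensor_per_s * EXPOSURE_S:.2f} photons per exposure")  # order 0.1-1
```

Counts this low are why an unamplified source cannot be imaged directly at these time scales, and why pulsed illumination with repeated, averaged exposures is needed.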
Second, because of the time scales involved, synchronization of the sensor and the illumination must be executed with picosecond precision. Third, standalone streak sensors sacrifice the vertical spatial dimension in order to code the time dimension, thus