92 COMMUNICATIONS OF THE ACM | NOVEMBER 2017 | VOL. 60 | NO. 11
but sweeping is often slower due to the large number of
sweeps required to obtain accurate results.16
One drawback of these methods is that they do not reuse
information: the distance to different source sets γ must be
computed entirely from scratch each time. Also note that
both sweeping and marching present challenges for parallelization: priority queues are inherently serial, and irregular
meshes lack a natural sweeping order.
In a different development, Mitchell et al.27 give an
O(n² log n) algorithm for computing the exact polyhedral
distance from a single source to all other vertices of a triangulated surface. Surazhsky et al.40 demonstrate that this
algorithm tends to run in sub-quadratic time in practice,
and present an approximate O(n log n) version of the algorithm with guaranteed error bounds; Bommes and Kobbelt4
extend the algorithm to polygonal sources. Similar to fast
marching, these algorithms propagate distance information in wavefront order using a priority queue, again making
them difficult to parallelize. More importantly, the amortized cost of these algorithms (over many different source
subsets γ) is substantially greater than for the heat method
since they do not reuse information from one subset to the
next. Finally, although Surazhsky et al.40 greatly simplify the original formulation, these algorithms remain challenging to implement and do not immediately generalize to domains other
than triangle meshes.
Closest to our approach is the recent method of Rangarajan
and Gurumoorthy,32 who do not appear to be aware of
Varadhan's formula; they instead derive an analogous relationship between the distance function and solutions ψ to
the time-independent Schrödinger equation; this derivation
applies only in flat Euclidean space rather than general curved
domains. Moreover, they compute solutions using the fast
Fourier transform, which limits computation to regular grids.
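For reference, Varadhan's formula, referenced in this article as Equation 1, relates the geodesic distance d(x, y) to the short-time behavior of the heat kernel k_t(x, y); in its standard form:

```latex
d(x, y) = \lim_{t \to 0} \sqrt{-4t \, \log k_t(x, y)}
```

Intuitively, heat diffused for a very short time cannot travel far, so the rate at which the kernel decays away from the source encodes distance.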
A slight modification of the heat method allows us to compute a smoothed distance function, useful in contexts where
sharp discontinuities can cause subsequent numerical difficulties. Previous smooth distance approximations provide
this regularity at the cost of poor approximation of the true
geometric length10, 14, 21, 33; see Section 3.3 for a comparison.
Recently, the heat method has facilitated a variety of tasks
in computational science and data analysis. For example,
Huth et al.15 use fast distance queries to optimize a probabilistic
model of cortical organization; van Pelt et al.41 use
the heat method to assist cerebral aneurysm assessment;
Zou et al.47 use the heat method for efficient tool path planning;
Solomon et al.38 leverage our approach to efficiently
solve optimal transport problems on geometric domains;
Lin et al.20 apply this approach to vector-valued data in the
context of manifold learning. Figure 5 shows a real-world
design application of the heat method based on differential
growth. Various improvements have also been made
to the original algorithm; for instance, de Goes et al.13 and
Yang and Cohen45 describe two different ways to extend the
method to accurate computation of anisotropic distance; it
has also been adapted to voxelizations,6 C¹ finite elements,29
and subdivision surfaces.12 Finally, Belyaev and Fayolle3
provide a variational interpretation of our method, observing
that more accurate results can be obtained by either
iterating the heat method or by applying more sophisticated
descent strategies.
3. THE HEAT METHOD
A useful feature of the heat method is that the basic
algorithm can be described in the purely continuous setting
(i.e., in terms of curved surfaces, or more generally, smooth
manifolds) rather than in terms of discrete data structures and
algorithms. In other words, at this point one should not imagine that we have chosen a particular data structure (triangle
meshes, grids, point clouds, etc.) or even dimension (2D, 3D,
etc.). Instead, we focus on a general principle that can be
applied on many different domains in different dimensions.
We will later explore several particular choices of spatial and
temporal discretization (Sections 3.1 and 3.2); further alternatives have been explored in recent literature.13, 29, 45
In general, the heat method can be applied in any setting
where one has a gradient operator ∇, divergence operator ∇·, and
Laplace operator Δ = ∇·∇, that is, the standard derivatives from vector calculus, possibly generalized to curved domains. Expressed in terms
of these operators, the heat method consists of three basic steps:
Algorithm 1 The Heat Method
I. Integrate the heat flow u̇ = Δu for some fixed time t.
II. Evaluate the vector field X = −∇ut /|∇ut|.
III. Solve the Poisson equation Δφ = ∇·X.
The function φ approximates the distance to a given source
set, approaching the true distance as t goes to zero (Equation 1).
For instance, to recover the distance to a single point x we
use initial conditions u0 = δ(x), that is, a Dirac delta encoding an “infinite spike” of heat. More generally we can obtain
the distance to any subset γ by letting u0 be a generalized
Dirac distribution42—essentially an indicator function over
γ; see Figures 3 and 7. Note that since the solution to (III) is
determined only up to an additive constant, final values are
shifted such that the smallest distance is zero.
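As a concrete illustration, the three steps above can be sketched on a flat regular grid with standard finite differences. The grid size, time step, boundary handling, and source location below are illustrative assumptions, not part of the method as developed in this article (which treats general domains and discretizations); a single backward Euler step stands in for integrating the heat flow.

```python
# Illustrative sketch of the heat method (Algorithm 1) on a flat N×N grid
# with 5-point finite differences. Parameters here are assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, h = 64, 1.0            # grid resolution and spacing (illustrative)
t = 4.0 * h * h           # diffusion time on the order of h^2

# 1D second-difference matrix with Neumann-style boundary rows
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)).tolil()
D[0, 0] = -1.0
D[N - 1, N - 1] = -1.0
D = (D / h**2).tocsr()
I = sp.identity(N, format="csr")
L = sp.kron(I, D, format="csr") + sp.kron(D, I, format="csr")  # discrete Δ

# Step I: one backward Euler step of heat flow, (Id − tΔ) u = u0
u0 = np.zeros(N * N)
u0[(N // 2) * N + N // 2] = 1.0        # Dirac delta at the grid center
u = spla.spsolve((sp.identity(N * N) - t * L).tocsc(), u0)

# Step II: normalized negative gradient X = −∇u / |∇u|
U = u.reshape(N, N)
gy, gx = np.gradient(U, h)             # axis 0 = y, axis 1 = x
norm = np.maximum(np.hypot(gx, gy), 1e-12)
Xx, Xy = -gx / norm, -gy / norm

# Step III: solve the Poisson equation Δφ = ∇·X. L is singular (constants
# lie in its null space); a tiny diagonal shift makes it invertible, and
# the resulting constant offset is removed by the final shift.
div = np.gradient(Xx, h, axis=1) + np.gradient(Xy, h, axis=0)
phi = spla.spsolve((L - 1e-7 * sp.identity(N * N)).tocsc(), div.ravel())
phi -= phi.min()                       # smallest distance is zero
P = phi.reshape(N, N)                  # approximate distance to the center
```

The field P increases roughly linearly away from the source, as a distance function should; accuracy improves with finer grids and a diffusion time scaled to the grid spacing.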
The heat method can be motivated as follows. Consider
an approximation ut of heat flow for a fixed time t. Unless
ut exhibits precisely the right rate of decay, Varadhan’s
Figure 5. The heat method has been applied to a diverse range of tasks
that demand repeated geodesic distance queries. Here, geodesic
distance drives a differential growth model (left) that is used for
computational design (right). Images courtesy Nervous System/Jesse