As the user sweeps the primitive, the program dynamically
adjusts the progressive profile by sensing the pictorial context in the photograph and automatically snapping to it.
Furthermore, relationships between the various primitive
parts are automatically recognized and preserved by the
program. Using several such 3-Sweep operations, the user
can model 3D parts consistent with the object in the photograph, while the computer automatically maintains global
constraints linking them to other primitives comprising
the object. Using 3-Sweep technology, non-professionals
can extract 3D objects from photographs. These objects can
then be used to build a new 3D scene, or to alter the original
image by changing the objects or their parts in 3D and pasting them back into the photo.
Modeling from images. Images have always been an important resource for 3D modeling. Many techniques use multiple
images or videos to model 3D shapes and scenes.
Our focus is on modeling objects from a single photograph.
This task is challenging because of the inherent ambiguity of
the mapping from a 3-dimensional world to a 2-dimensional
image. To overcome this ambiguity, methods impose both constraints on the type of images that can be used, such as non-oblique views, and prior assumptions on the possible geometry that can be extracted.
Fully automatic methods use assumptions such as symmetry,18 or the existence of a prior 3D model similar to the one in the photograph.22 Some limit the geometry to planes or smooth surfaces, while others limit the application range (e.g., to architectural models3, 10). There are methods that automatically fit edges or regions in an image or sketch.15 These are basically 2D methods, generating either 2D models or 3D models by extrusion. Our method
can directly generate complex oblique shapes in 3D space
such as the menorah in Figure 1 and the telescope in Figure 7.
Automatic methods would fail on such examples since their
prior assumptions are not met, or because they rely on region
color or clear edges, which can be missing or occluded.
Other methods use a human-in-the-loop for modeling,
but usually require extensive manual interaction and are
more time-consuming than the 3-Sweep approach.
They either require a complete sketch of the object or
tedious labeling before the machine takes control of an
optimization process, while in 3-Sweep user guidance and
automatic snapping are interlaced.
Sketch-based modeling. The task of 3D modeling from a
single image is closely related to the reconstruction or definition of a 3D shape from a sketch.17 The user either directly draws the curves of the object1 or fits parts or primitives to a predefined sketch.12 Our work was inspired by a tool for modeling simple 3D objects from sketches presented by Shtof et al.19 In that work, geo-semantic constraints were used between primitive parts to define their semantic and
geometric relationships, and connect them to form the final
object. However, their approach is geared towards sketches
and uses a drag and drop interface for primitives that need
to fit the sketch contours.
Computer-aided design. The use of constraints in computer-aided design has been studied extensively and allows the definition of semantic information that relates different geometric parts in an object. Automatically inferring constraints from the object or its parts’ geometry has been used for reverse engineering5 and object deformation and editing.11, 21 Similarly, sweep-based models have been defined and used since the beginning of the field.9 While we cannot
report all computer-aided design work aiming at modeling
sweep-based primitives, to our knowledge, none of these
methods have been applied to modeling from photographs,
nor have they paired sweeping with snapping to image edges.
Object-based image editing. Apart from modeling a
3D object, 3-Sweep allows the application of object-based
image editing operations that previously required extensive
user interaction8 or massive data collection.
Our interactive modeling approach takes as input a single
photo such as the one in Figure 1a. Our goal is to extract a 3D
model whose projection exactly matches the object in the
image. Using the 3-Sweep modeling technique, the user constructs the whole object gradually from its parts. This means
that the object is implicitly decomposed into simple parts,
which are typically semantically meaningful. We define two
types of primitives: both use a piecewise-linear centerline to sweep a cross-section that is assumed to be perpendicular to the centerline at every location. One type of primitive uses a circular cross-section that can vary in radius along the sweep, and the other uses a rectangular cross-section that can change its aspect ratio along the sweep.
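As a concrete illustration, the circular primitive can be thought of as a generalized cylinder: a circle of varying radius swept perpendicular to a piecewise-linear centerline. The sketch below is a minimal NumPy rendition under our own conventions; the function name, the frame construction, and the sampling scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sweep_circular_primitive(centerline, radii, n_sides=16):
    """Sweep a circular cross-section along a piecewise-linear
    centerline, keeping each section perpendicular to the local
    tangent.  Returns an array of shape (n_stations, n_sides, 3)
    of surface sample points."""
    centerline = np.asarray(centerline, dtype=float)
    up = np.array([0.0, 0.0, 1.0])
    rings = []
    for i, c in enumerate(centerline):
        # Tangent of the piecewise-linear axis at station i
        # (one-sided at the ends, central difference inside).
        if i == 0:
            t = centerline[1] - centerline[0]
        elif i == len(centerline) - 1:
            t = centerline[-1] - centerline[-2]
        else:
            t = centerline[i + 1] - centerline[i - 1]
        t = t / np.linalg.norm(t)
        # Orthonormal frame (u, v) spanning the plane
        # perpendicular to the tangent.
        ref = up if abs(np.dot(t, up)) < 0.99 else np.array([1.0, 0.0, 0.0])
        u = np.cross(t, ref)
        u = u / np.linalg.norm(u)
        v = np.cross(t, u)
        ang = np.linspace(0.0, 2.0 * np.pi, n_sides, endpoint=False)
        ring = c + radii[i] * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v))
        rings.append(ring)
    return np.stack(rings)
```

The rectangular primitive would follow the same pattern, replacing the circle with a rectangle whose aspect ratio varies per station.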
Such a decomposition is easy and intuitive for users, and it provides the computer with significant information for reconstructing a coherent 3D man-made object from its parts’ projections. The parts are expected to have typical geometric relationships that can be exploited to guide the composition of the whole object. Although the user interacts with the
given photo, she does not need to exactly fit the parts to the
photo or connect them to each other. 3-Sweep automatically
snaps primitive parts to object outlines created from edges,
and connects them to previously defined 3D parts.
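To make the snapping idea concrete, here is a toy sketch (ours, not the system's actual algorithm): given edge points detected in the photo, each sample on a primitive's projected outline moves to its nearest edge point within a search radius, and stays put otherwise.

```python
import numpy as np

def snap_to_edges(profile_pts, edge_pts, max_dist=10.0):
    """Snap each 2D outline sample to its nearest detected edge
    point if one lies within max_dist pixels; otherwise keep the
    original sample.  edge_pts would come from an edge detector
    run on the photograph."""
    profile_pts = np.asarray(profile_pts, dtype=float)
    edge_pts = np.asarray(edge_pts, dtype=float)
    snapped = profile_pts.copy()
    for i, p in enumerate(profile_pts):
        d = np.linalg.norm(edge_pts - p, axis=1)  # distance to every edge point
        j = np.argmin(d)
        if d[j] <= max_dist:
            snapped[i] = edge_pts[j]
    return snapped
```

A brute-force nearest-neighbor search is used here for clarity; a real system would use a spatial index and would also penalize snaps that distort the primitive's shape.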
To create a single 3D primitive part on the given photo, the
designer uses three strokes. The first two strokes define a 2D
profile of the part and the third stroke defines its main axis,
which is either straight or curved (see Figure 2). Defining the
profile and sweeping the axis are simple operations since
Figure 2. The 3-Sweep paradigm is used to define general cylinder
and cuboid parts.
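Under one simple reading of the three-stroke interaction (a toy interpretation of ours, not the paper's algorithm; the actual system also snaps the profile to image edges as described above), the strokes might map to initial primitive parameters as follows:

```python
import numpy as np

def primitive_from_strokes(profile_stroke_1, profile_stroke_2, axis_stroke):
    """Toy interpretation of the three strokes: take the far
    endpoints of the two profile strokes as a diameter of a
    circular profile, and the third stroke as the piecewise-linear
    main axis.  All names and conventions here are assumptions."""
    p1 = np.asarray(profile_stroke_1, dtype=float)[-1]
    p2 = np.asarray(profile_stroke_2, dtype=float)[-1]
    center = 0.5 * (p1 + p2)               # profile center in image space
    radius = 0.5 * np.linalg.norm(p2 - p1) # half the diameter
    axis = np.asarray(axis_stroke, dtype=float)  # main axis polyline
    return center, radius, axis
```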