1. Adams, A. et al. The Frankencamera: An experimental
platform for computational photography. ACM
Transactions on Graphics 29, 4 (July 2010), article 29.
2. Debevec, P.E. and Malik, J. Recovering high-dynamic-range radiance maps from photographs. In Proceedings
of the ACM SIGGRAPH Conference (Los Angeles, CA,
Aug. 11–15). ACM Press, New York, 2008, 31.
3. Georgiev, T. et al. Spatio-angular resolution trade-offs
in integral photography. In Proceedings of the 17th
Eurographics conference on Rendering Techniques
(Nicosia, Cyprus, June 26–28). Eurographics
Association, Aire-la-Ville, Switzerland, 2006, 263–272.
4. littleBits Electronics Inc., New York; http://littlebits.cc/
5. Manakov, A. et al. A reconfigurable camera add-on for
high dynamic range, multispectral, polarization, and
light-field imaging. ACM Transactions on Graphics 32,
4 (July 2013), article 47.
6. Ng, R. et al. Light-field photography with a hand-held
plenoptic camera. Computer Science Technical Report
CS TR 2, 11 (Apr. 2005), 1–11.
7. Nomura, Y., Zhang, L., and Nayar, S.K. Scene collages
and flexible camera arrays. In Proceedings of the 18th
Eurographics Conference on Rendering Techniques
(Grenoble, France, June 25–27). Eurographics
Association, Aire-la-Ville, Switzerland, 2007, 127–138.
8. Olympus Corporation, Tokyo, Japan; https://opc.
9. Peleg, S. and Ben-Ezra, M. Stereo panorama with a
single camera. In Proceedings of the Conference on
Computer Vision and Pattern Recognition (Fort Collins,
CO, June 23–25). IEEE Computer Society, Los
Alamitos, CA, 1999.
10. RED Digital Cinema Camera Company, Lake Forest,
11. Reinhard, E. Parameter estimation for photographic
tone reproduction. Journal of Graphics Tools 7, 1 (Nov.
12. Ricoh Company, Ltd., Tokyo, Japan; https://www.ricoh.
13. Schneider, D., Schwalbe, E., and Maas, H.G. Validation
of geometric models for fisheye lenses. Journal of
Photogrammetry and Remote Sensing 64, 3 (May
14. Schweikardt, E. and Gross, M.D. roBlocks: A robotic
construction kit for mathematics and science
education. In Proceedings of the Eighth International
Conference on Multimodal Interfaces (Banff, Alberta,
Canada, Nov. 2–4). ACM Press, New York, 2006, 72–75.
15. USB Implementers Forum, Inc. High Speed USB
Platform Design Guidelines Rev. 1.0; http://www.usb.
16. Yim, M. et al. Modular self-reconfigurable robot
systems: Grand challenges of robotics. IEEE Robotics
& Automation Magazine 14, 1 (Apr. 2007), 43–52.
17. Zhou, C., Miau, D., and Nayar, S. K. Focal Sweep Camera
for Space-Time Refocusing. Technical Report.
Department of Computer Science, Columbia
University, New York, 2012; https://academiccommons.
Makoto Odamaki is an
engineer of digital camera systems at Ricoh Company,
Ltd., Tokyo, Japan.
Shree K. Nayar is the T.C.
Chang Professor of Computer Science at Columbia
University in New York where he also heads the Columbia
Copyright held by the authors.
Publication rights licensed to ACM. $15.00.
cal filters like diffusion and polarization, as well as more complex ones
(such as a lens array and a teleidoscope).
The Cambits lens-array attachment includes seven acrylic ball lenses to produce a 4D light-field image of the scene. 3
The teleidoscope attachment produces
a kaleidoscope image. An acrylic ball
lens in front of the attachment captures
the scene image, and a set of first-surface planar mirrors between the ball
lens and the lens block creates multiple
rotated copies of the image.
The focal stack lens block includes a
linear actuator that physically sweeps
the lens to capture a set of images corresponding to different focus settings.
The piezoelectric linear actuator moves the lens precisely in steps of 0.05mm, with a total travel of up to 2.0mm. From the captured stack of images, Cambits computes an index map indicating, for each pixel, the image in which that pixel is in focus. The focal stack lens block then
generates an interactive image that lets
users click on any part of the image to
bring it into focus. 6, 17
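The click-to-refocus interaction rests on the index map computed from the focal stack. The sketch below shows one way such a map could be computed; the Laplacian-based sharpness measure and all function names are our own illustrative assumptions, not the actual Cambits implementation.

```python
import numpy as np

def _laplacian(img):
    # 4-neighbor Laplacian; np.roll wraps at the borders, which is
    # acceptable for this illustration.
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def _box3(a):
    # 3x3 box filter to aggregate per-pixel sharpness over a small window.
    return sum(np.roll(np.roll(a, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def index_map(stack):
    """Given a focal stack (N, H, W), return an (H, W) map giving, for
    each pixel, the index of the image in which it is sharpest."""
    sharpness = np.stack([_box3(np.abs(_laplacian(img.astype(float))))
                          for img in stack])
    return np.argmax(sharpness, axis=0)

def refocus(stack, idx_map, y, x):
    """Return the stack image in which the clicked pixel (y, x) is in
    focus; this lookup is what makes the image interactive."""
    return stack[idx_map[y, x]]
```

A click handler need only read the clicked coordinates and call `refocus` to display the image in which that region is sharp.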
We designed Cambits so it would be
possible to insert a rotary actuator between the base and the sensor to scan a
panorama of a scene. If the camera is
rotated off-axis—with an offset between
the rotation axis and the center of projection of the camera—users would be
able to take left and right image strips
from the captured sequence of images
to generate a stereo panorama for creating virtual reality. 9 In the example in Figure 6g, 120 images were taken while the
actuator rotated 120 degrees and the
offset between the rotation axis and the
center of projection of the camera lens
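The strip-based construction of a stereo panorama can be sketched as follows. The strip width and center offset here are illustrative parameters, and which side of the image feeds which eye depends on the rotation direction; this is not the authors' implementation.

```python
import numpy as np

def stereo_panorama(frames, strip_width=8, offset=40):
    """Build left- and right-eye panoramas from frames captured while
    the camera rotates off-axis. A narrow vertical strip is taken from
    each frame: strips right of the image center go to one eye's
    panorama, strips left of center to the other."""
    h, w = frames[0].shape[:2]
    c = w // 2
    left_eye = np.concatenate(
        [f[:, c + offset : c + offset + strip_width] for f in frames], axis=1)
    right_eye = np.concatenate(
        [f[:, c - offset - strip_width : c - offset] for f in frames], axis=1)
    return left_eye, right_eye
```

Because the two strip columns see the scene from horizontally displaced viewpoints as the camera sweeps, the concatenated panoramas form a stereo pair.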
A second rotary actuator can be added to the system to configure a pan/tilt camera.
Cambits is not limited to a single image sensor. Its second base can be used
with two sensor blocks and lenses to
create a stereo camera system with a
baseline of 44mm. Cambits processes
the left and right video streams from
this system in real time to produce a
gray-coded-depth video of the scene.
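The article does not detail the real-time depth pipeline; a minimal version, assuming rectified grayscale frames and a simple sum-of-absolute-differences block matcher, could look like this:

```python
import numpy as np

def _box(a, r):
    # Sum over a (2r+1) x (2r+1) window; np.roll wraps at the borders,
    # which is tolerable for this illustration.
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, 0), dx, 1)
    return out

def disparity_map(left, right, max_disp=8, radius=2):
    """Naive block-matching stereo on rectified grayscale images: for
    each pixel, pick the horizontal shift d that minimizes the summed
    absolute difference between left[:, x] and right[:, x - d]."""
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        costs[d][:, d:] = _box(diff, radius)
    return np.argmin(costs, axis=0)
```

Mapping each disparity value to a gray level then yields a gray-coded depth frame; repeating this per frame pair gives a depth video.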
Cambits can also be used to assemble a microscope that includes an objective lens, a mechanism to adjust the
height of the sample slide to bring the
sample into focus, and an LED light to
"bright field" illuminate the sample.
The user controls the LED brightness through the host computer.
Alternatively, ambient illumination in the environment can be used to backlight the sample.
Cambits is a versatile modular imaging
system that lets users create a range of
computational cameras. The current
prototype is a proof of concept we use to
demonstrate key aspects of Cambits:
ease of assembly, self-identification,
and diverse functionality. We have thus
shown Cambits can be a powerful platform for computational photography,
enabling users to express their creativity
along several dimensions. An important aspect of Cambits is that it is designed as an open, scalable platform. That design allows users to
add multiple hardware blocks, including structured light sources, multispectral sources, telescopic optical attachments, and even non-imaging sensors
for measuring acceleration, orientation, sound, temperature, and pressure.
We anticipate developing algorithms
that use such a diverse set of sensors to
trigger/control various image-capture-and-processing strategies. To encourage others to modify or build on the current system, we have made the details of
its hardware and software design available at http://www.cs.columbia.edu/
We did this research at the Computer
Vision Laboratory of Columbia University in New York while Makoto Odamaki was a Visiting Scientist from
Ricoh Company, Ltd.; for the design
data covered here, see http://www.cs.columbia.edu/CAVE/projects/cambits. We thank William Miller for designing and 3D printing the chassis of
the Cambits blocks, Wentao Jiang for
his contribution to the user interface,
and Daniel Sims for editing the demonstration video and formatting the
project webpage. Divyansh Agarwal,
Ethan Benjamin, Jihan Li, Shengyi
Lin, and Avinash Nair implemented
several of the computational-photography algorithms. The authors also
thank Anne Fleming for proofreading
an early draft of the article.