Computational photography has become an increasingly active area of research within the computer vision community. Within the last few years, the amount of research has grown tremendously, with dozens of papers published per year in a variety of vision, optics, and graphics venues. A similar trend can be seen in the emerging field of computational displays: spurred by the widespread availability of precise optical and material fabrication technologies, the research community has begun to investigate the joint design of display optics and computational processing. Such displays are designed not only for human observers but also for computer vision applications, providing high-dimensional structured illumination that varies in space, time, angle, and the color spectrum. This workshop is designed to unite the computational camera and display communities by considering to what degree concepts from computational cameras can inform the design of emerging computational displays, and vice versa, with a focus on applications in computer vision.
| 8:30 - 8:45
|Welcome / Opening Remarks
| 8:45 - 9:45
|Keynote 1: Visible Light Tomography in Computer Graphics
Wolfgang Heidrich (University of British Columbia)
| 9:45 - 10:00
|Papers and Posters Fast Forward
| 10:00 - 10:30
|— Morning Break —
| 10:30 - 11:30
|Papers Session 1
Recovering Spectral Reflectance under Commonly Available Lighting Conditions
Jun Jiang and Jinwei Gu
Geometry-Corrected Light Field Rendering for Creating a Holographic Stereogram
Joel Jurik, Thomas Burnett, Michael Klug, and Paul Debevec
Capturing Relightable Images using Computer Monitors
Prabath Gunawardane, Steven Scher, and James Davis
| 11:30 - 12:30
|Keynote 2: Cool Stuff with Perception
Marty Banks (University of California, Berkeley)
| 12:30 - 13:30
|— Lunch Break —
| 13:30 - 14:10
|Papers Session 2
Light Field Processing Using GMM Patch Prior
Kaushik Mitra and Ashok Veeraraghavan
A Kaleidoscopic Approach to Surround Geometry and Reflectance Acquisition
Ivo Ihrke, Ilya Reshetouski, Alkhazur Manakov, Art Tevs, Michael Wand, and Hans-Peter Seidel
| 14:10 - 15:00
|Posters Session
Photometric Modeling for Active Scenes
Wenjia Yuan and Kristin Dana
Spatio-Temporal Mixing to Increase Intensity Resolution on a Single Display
Pawan Harish, Parikshit Sakurikar, and P.J. Narayanan
Motion-Invariant Coding Using a Programmable Aperture Camera
Toshiki Sonoda, Hajime Nagahara, and Rin-ichiro Taniguchi
Personal to Shared Moments with Angled Graphs of Pictures
Aydin Arpa, Otkrist Gupta, Gabriel Taubin, Rahul Sukthankar, and Ramesh Raskar
Focal Length Modulation of Projection Lens for Defocus Blur Compensation
David Samuel, Daisuke Iwai, and Kosuke Sato
The Parabolic Multi-Mirror Camera
Stephan Wenger, Stefan John, and Marcus Magnor
Low-Power Mobile LCD Displays using Backlight Dimming with 2D Gradient Histogram Equalization
Steven Scher, Dick McCartney, and James Davis
Dynamic Reflectance Control of Photochromic Compounds for 3D High Dynamic Range Display
Naoto Hino, Daisuke Iwai, and Kosuke Sato
| 15:00 - 15:30
|— Afternoon Break —
| 15:30 - 16:30
|Keynote 3: Light-field Displays in Perspective
Michael Klug (CTO of Zebra Imaging)
| 16:30 - 16:45
|Closing Remarks and Best Paper Award
Computational cameras and displays exploit the co-design of optical elements and computational processing to push the boundaries of conventional light acquisition and display systems. Submissions on all aspects of computational cameras and displays, especially exploring their duality, are encouraged:
Duality of Cameras and Displays:
Apply concepts and innovations developed for cameras to displays, or vice versa; examples include compressive displays and extended depth-of-field projection.
Explore imaging systems that combine light acquisition and display. Examples include projector-camera systems, light-sensitive displays, and unconventional camera flashlights.
Theoretical Analysis of Computational Cameras and Displays:
Establishing fundamental limits for different aspects of computational cameras and displays; new approaches to data processing, for instance using machine learning or compressive sensing.
Exotic Camera and Display Technologies:
Design of unconventional optics, sensors, light-emitters, or computational processing for imaging systems. Examples include systems inspired by biological visual systems, omni-vision displays, and single pixel cameras.
Perceptual Aspects of Computational Displays:
Computational models for human perception, perceptually-driven computational displays, and all aspects of the human visual system relevant to computational displays.
Natural Image Statistics for Computational Cameras:
Mathematical priors exploited in the design of computational cameras; high-dimensional priors for natural scenes, including the color spectrum and spatial, temporal, and directional light variation.
Visible Light Tomography in Computer Graphics
Wolfgang Heidrich, University of British Columbia
Tomographic methods are the standard approach for obtaining volumetric measurements in medicine, science, and engineering. Typical tomography setups acquire 2D X-ray images of an object and reconstruct a 3D voxel representation from this data. Unfortunately, for many applications in computer graphics, such X-ray setups are not feasible due to cost and/or safety concerns. In this presentation, I will introduce our recent work on visible light tomography, which has much more modest hardware requirements. I will discuss tomographic methods in the presence of refraction, and show applications to the scanning of transparent objects and the capture of gas flows. I will also discuss first results on a new solver for generic tomography problems.
Professor Wolfgang Heidrich holds the Dolby Research Chair in Computer Science at the University of British Columbia. He received a PhD in Computer Science from the University of Erlangen in 1999, and then worked as a Research Associate in the Computer Graphics Group of the Max-Planck-Institute for Computer Science in Saarbrucken, Germany, before joining UBC in 2000. Heidrich's research interests lie at the intersection of computer graphics, computer vision, imaging, and optics. In particular, he has worked on high dynamic range imaging and display; image-based modeling, measuring, and rendering; geometry acquisition; GPU-based rendering; and global illumination. Heidrich has written over 100 refereed publications on these subjects and has served on numerous program committees. He was the program co-chair for Graphics Hardware 2002, Graphics Interface 2004, and the Eurographics Symposium on Rendering 2006.
Cool Stuff with Perception
Marty Banks, UC Berkeley Vision Science
Marty Banks attended Occidental College in Los Angeles, majoring in Psychology and minoring in Physics. He received a BA in Psychology in 1970 and spent a year teaching in the German school system in Ludwigsburg, Germany. He then attended UC San Diego, receiving an MS in Experimental Psychology in 1973, and the University of Minnesota, receiving a PhD in Developmental Psychology in 1976. Marty Banks was Assistant and Associate Professor of Psychology at the University of Texas at Austin from 1976 to 1985. He then moved to UC Berkeley, where he has been Associate and Full Professor of Optometry and Vision Science, with joint appointments in Psychology, Neuroscience, and Bioengineering. He was the chair of Vision Science from 1996 to 2002. Prof. Banks has received a number of awards, including Fellow of the American Association for the Advancement of Science, Fellow of the American Psychological Society, and the Koffka Medal.
Light-field Displays in Perspective
Michael Klug, CTO & Co-Founder, Zebra Imaging
Light-field 3D displays have been explored in many forms over the past 100+ years, from integral and lenticular photography and parallax panoramagrams to “holographic element” or “hogel”-based displays and multi-layer automultiscopic displays. This talk will consider some of the history of light-field displays, particularly in the context of holographic and quasi-holographic encoding, and contemporary advances, enabled by smaller updatable pixels, more sophisticated computational display algorithms, and the growing ubiquity and speed of computational hardware. We’ll look at examples of holographic light-field displays, consider initial successful applications, and contemplate challenges and some creative ideas for light-field display content creation, including the notion of the display as a capture device.
Michael Klug received a Bachelor of Science degree from MIT in 1989 and a Master of Science degree from the Spatial Imaging Group at the MIT Media Lab in 1991. From 1991 until 1997, Michael worked as a Research Scientist at the MIT Media Laboratory, focusing on design and development of 3D displays and holographic systems. During his tenure at MIT, he was responsible for basic technology development, prototype construction and demonstration, proposal generation and program budgeting and renewal, sponsor recruitment, coordination, presentations and demonstrations, and laboratory resource management and establishment of group research goals. He developed the basic technological predecessors of the systems now productized by Zebra Imaging and has served as a consultant to various companies, such as Polaroid Corporation, in related fields. He is recognized internationally as one of a handful of experts in the field of automated hologram printing technology. At Zebra Imaging, Michael is responsible for overall technology strategy, intellectual property development and management, and integration of R&D, business development, market research, and technology and product trajectories.
Please refer to the following files for detailed formatting instructions.
A complete paper should be submitted using the above template, which is a blind-submission, review-formatted template. The length should match that intended for final publication. Papers accepted for the conference will be allocated up to 8 pages.
Authors may optionally upload supplementary material, which may not fit in the PDF size limit and may include:
videos to showcase results/demo of the proposed approach/system,
images and other results in addition to the ones in the paper,
anonymized related submissions to other conferences and journals, and
appendices or technical reports containing extended proofs and mathematical derivations that are not essential to the understanding of the submitted paper.
We encourage authors to submit videos using an MPEG-4 codec such as DivX, contained in an AVI file. Please also submit a README text file with each video specifying the exact codec used and a URL where the codec can be downloaded.
The authors should refer to the contents of the supplementary material appropriately in the paper. Note that reviewers will be encouraged to look at it, but are not obligated to do so. Please note that:
All supplementary material must be zipped into a single file. Alternatively, you can choose to upload a PDF file containing any non-video item listed above. CMT imposes a 10MB limit on the size of this file. Note that you can update the file by uploading a new one (the old one will be deleted and replaced).
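The packaging steps above can be sketched in a short script. This is an illustrative helper, not part of the official instructions; the function name, file names, and the 10MB figure (taken from the note above) are assumptions you should adapt to your own submission.

```python
import os
import zipfile

def pack_supplementary(items, archive="supplementary.zip", limit_mb=10):
    """Zip all supplementary items into a single archive, as CMT requires,
    and warn if the result exceeds the stated size limit."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in items:
            # Store each item at the top level of the archive.
            zf.write(path, arcname=os.path.basename(path))
    size_mb = os.path.getsize(archive) / (1024 * 1024)
    if size_mb > limit_mb:
        print(f"Warning: {archive} is {size_mb:.1f} MB, "
              f"over the {limit_mb} MB limit")
    return archive

# Example (hypothetical file names):
# pack_supplementary(["results_video.avi", "results_video_README.txt",
#                     "appendix.pdf"])
```

Re-running the helper simply overwrites the archive, mirroring CMT's behavior of replacing an old upload with a new one.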
The paper for review (PDF only) must be submitted first before the supplementary material (PDF or ZIP only) can be submitted.
Please make sure that the supplementary material directly supports the paper as submitted prior to the paper deadline. ONLY results generated by the algorithm/approach/system reported in the submitted version are allowed. Material based on improvements subsequent to the paper deadline is not allowed.
Do not submit a newer version of the paper as supplementary material. A newer version of the paper or portion thereof, with description of an improved algorithm/approach/system or even one spelling or typo correction, is not allowed.
Paper submission and review site:
https://cmt.research.microsoft.com/CCD2012/ (bookmark or save this URL!)
Please add "firstname.lastname@example.org" to your list of safe senders (whitelist) to prevent important email announcements from being blocked by spam filters.
If you have been invited to review for this workshop, an account has been automatically generated for you using your contact email as the account name (regardless of whether you agreed to review). You only need to request a new password via "Reset your password". If you have agreed to review, please follow the reviewer login instructions.
If you have not been invited to review for this workshop, you are not in the system; please sign up as a new user. Since this system is new for CVPR, please read the instructions carefully. If you have created an account and forgotten your password, just click "Reset your password" and instructions will be emailed to you.
Logging in for the first time:
When you log in for the first time, you will be asked to enter your conflict domain information; you will not be able to submit any paper without it. This information is needed to ensure conflict-free reviewing of all papers.
Update contact information:
At any time, you can edit your contact information (see item near the top right in the submission site). Don't forget to click the "Update" button to save the edited information. If you wish to change the contact email address, you can modify it via the "Change your Email" box.
Enter subject (topic) areas for your paper:
When you submit a paper, you will be asked to specify its associated subject areas. Please note that you may indicate only one "primary" subject area and any number of "secondary" subject areas. Please pay extra attention when selecting your subject areas, as this information is critical in allowing us to properly assign papers to area chairs and reviewers. Caution: you cannot pick the "primary" subject area as a "secondary" subject area; if you do, the system will not allow you to save. For example, if you had picked "Face and Gesture" as the "primary" area, you cannot also pick "Face and Gesture" as a "secondary" area.
Once you have registered your paper (i.e., title and authors), you will be assigned a paper number. Insert this number into the LaTeX or Word template before generating the PDF of your paper for submission. Papers submitted without a number may not be reviewed.
The maximum size of the abstract is 4000 characters.
The paper must be PDF only (maximum 15MB).
The supplementary material can be either PDF or ZIP only (maximum 30MB).
If your submission has co-authors, please make sure that the email addresses you enter for them correspond exactly to their account names (assuming they have created accounts). This ensures that your co-authors can see your submission when they log in. Co-authors must also have their conflict domains entered.
Gordon Wetzstein, MIT Media Lab
Douglas Lanman, MIT Media Lab
Ramesh Raskar, MIT Media Lab
Kyros Kutulakos, University of Toronto
Kari Pulli, NVIDIA Research
Ivo Ihrke, Saarland University / MPI Informatik
Amit Agrawal, MERL
Ashok Veeraraghavan, Rice University
Srinivas Narasimhan, Carnegie Mellon University
David Brady, Duke University
Gregg Favalora, Optics for Hire
Wolfgang Heidrich, University of British Columbia
Wojciech Matusik, MIT CSAIL
Hendrik Lensch, Ulm University
Diego Gutierrez, Universidad de Zaragoza
Mark Lucente, Stellarray, Inc.
Todd Zickler, Harvard University
Matthew Hirsch, MIT Media Lab
Abhijeet Ghosh, USC ICT
Oliver Bimber, Johannes Kepler University Linz
Matthew Trentacoste, University of British Columbia
Oliver Cossairt, Columbia University
Matthew O'Toole, University of Toronto
Yosuke Bando, Toshiba
Michael Bove, MIT Media Lab