Conference Paper: Deep optics: Joint design of optics and image recovery algorithms for domain specific cameras

Title: Deep optics: Joint design of optics and image recovery algorithms for domain specific cameras
Authors: Peng, Yifan Evan; Veeraraghavan, Ashok; Heidrich, Wolfgang; Wetzstein, Gordon
Issue Date: 2020
Citation: ACM SIGGRAPH 2020 Courses, SIGGRAPH 2020, 2020
Abstract: Application-domain-specific cameras that combine customized optics with modern image recovery algorithms are of rapidly growing interest, with widespread applications such as ultrathin cameras for internet-of-things devices or drones, as well as computational cameras for microscopy and scientific imaging. Existing approaches to designing imaging optics are either heuristic or optimize a proxy metric on the point spread function rather than the image quality after post-processing. Without a true end-to-end joint optimization, finding an optimal computational camera for a given visual task remains elusive. Although this joint-design concept has long been a core idea of computational photography, only now have advances in machine learning made the computational tools ready to efficiently model a true end-to-end imaging process. We describe the use of diffractive optics to enable lenses that are not only physically compact but also offer large and flexible design degrees of freedom. By building a differentiable ray- or wave-optics simulation model that maps the true source image to the reconstructed one, one can jointly train an optical encoder and an electronic decoder. The encoder can be parameterized by the point spread function (PSF) of the physical optics, and the decoder by a convolutional neural network. By running over a broad set of images and defining domain-specific loss functions, the parameters of the optics and the image processing algorithm are jointly learned. We describe typical photography applications for extended depth-of-field, large field-of-view, and high-dynamic-range imaging. We also describe the generalization of this joint design to machine vision and scientific imaging scenarios. To this point, we describe an end-to-end learned, optically coded super-resolution SPAD camera, and a hybrid optical-electronic convolutional-layer-based optimization of optics for image classification. Additionally, we explore lensless imaging with optimized phase masks for realizing an ultra-thin camera, high-resolution wavefront sensing, and face detection.
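The differentiable forward model described in the abstract can be sketched as follows. This is a minimal, illustrative scalar Fraunhofer-diffraction model in NumPy, not the course's actual implementation; the wavelength, refractive index, and mask dimensions are assumed example values. A phase-mask height map determines the PSF (the optical encoder), and the sensor measurement is the scene convolved with that PSF. In practice, the same forward model would be written in an autodiff framework so that gradients of a reconstruction loss can flow back into the mask heights and the decoder network jointly.

```python
import numpy as np

def psf_from_heightmap(heights, wavelength=550e-9, refractive_index=1.5):
    """Intensity PSF of a diffractive phase mask.

    Simplified scalar Fraunhofer model: the far-field amplitude is the
    Fourier transform of the pupil function, and the PSF is its squared
    magnitude, normalized to unit energy.
    """
    n = heights.shape[0]
    # Phase delay induced by the mask's height profile.
    phase = 2 * np.pi / wavelength * (refractive_index - 1.0) * heights
    # Circular aperture limiting the pupil.
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    aperture = (x**2 + y**2) <= (n // 2) ** 2
    pupil = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

def sensor_image(scene, psf):
    """Optical encoder: convolve the scene with the PSF (circular, via FFT)."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

# Example: a random height map (micron-scale features) imaging a random scene.
rng = np.random.default_rng(0)
heights = 1e-6 * rng.random((64, 64))
psf = psf_from_heightmap(heights)
scene = rng.random((64, 64))
measured = sensor_image(scene, psf)
```

A joint design would wrap this forward model together with a CNN decoder in one computation graph, define a domain-specific loss between the decoder output and the ground-truth scene, and update the height map and network weights together by gradient descent over a large image dataset.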
Persistent Identifier: http://hdl.handle.net/10722/315335

 

DC Field: Value
dc.contributor.author: Peng, Yifan Evan
dc.contributor.author: Veeraraghavan, Ashok
dc.contributor.author: Heidrich, Wolfgang
dc.contributor.author: Wetzstein, Gordon
dc.date.accessioned: 2022-08-05T10:18:30Z
dc.date.available: 2022-08-05T10:18:30Z
dc.date.issued: 2020
dc.identifier.citation: ACM SIGGRAPH 2020 Courses, SIGGRAPH 2020, 2020
dc.identifier.uri: http://hdl.handle.net/10722/315335
dc.description.abstract: Application-domain-specific cameras that combine customized optics with modern image recovery algorithms are of rapidly growing interest, with widespread applications such as ultrathin cameras for internet-of-things devices or drones, as well as computational cameras for microscopy and scientific imaging. Existing approaches to designing imaging optics are either heuristic or optimize a proxy metric on the point spread function rather than the image quality after post-processing. Without a true end-to-end joint optimization, finding an optimal computational camera for a given visual task remains elusive. Although this joint-design concept has long been a core idea of computational photography, only now have advances in machine learning made the computational tools ready to efficiently model a true end-to-end imaging process. We describe the use of diffractive optics to enable lenses that are not only physically compact but also offer large and flexible design degrees of freedom. By building a differentiable ray- or wave-optics simulation model that maps the true source image to the reconstructed one, one can jointly train an optical encoder and an electronic decoder. The encoder can be parameterized by the point spread function (PSF) of the physical optics, and the decoder by a convolutional neural network. By running over a broad set of images and defining domain-specific loss functions, the parameters of the optics and the image processing algorithm are jointly learned. We describe typical photography applications for extended depth-of-field, large field-of-view, and high-dynamic-range imaging. We also describe the generalization of this joint design to machine vision and scientific imaging scenarios. To this point, we describe an end-to-end learned, optically coded super-resolution SPAD camera, and a hybrid optical-electronic convolutional-layer-based optimization of optics for image classification. Additionally, we explore lensless imaging with optimized phase masks for realizing an ultra-thin camera, high-resolution wavefront sensing, and face detection.
dc.language: eng
dc.relation.ispartof: ACM SIGGRAPH 2020 Courses, SIGGRAPH 2020
dc.title: Deep optics: Joint design of optics and image recovery algorithms for domain specific cameras
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1145/3388769.3407486
dc.identifier.scopus: eid_2-s2.0-85091995181
