Conference Paper: Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation

Title: Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation
Authors: Ikoma, Hayato; Nguyen, Cindy M.; Metzler, Christopher A.; Peng, Yifan; Wetzstein, Gordon
Keywords: Computational Optics; Computational Photography
Issue Date: 2021
Citation: 2021 IEEE International Conference on Computational Photography, ICCP 2021, 2021, article no. 9466261
Abstract: Monocular depth estimation remains a challenging problem, despite significant advances in neural network architectures that leverage pictorial depth cues alone. Inspired by depth from defocus and emerging point spread function engineering approaches that optimize programmable optics end-to-end with depth estimation networks, we propose a new and improved framework for depth estimation from a single RGB image using a learned phase-coded aperture. Our optimized aperture design uses rotational symmetry constraints for computational efficiency, and we jointly train the optics and the network using an occlusion-aware image formation model that provides more accurate defocus blur at depth discontinuities than previous techniques do. Using this framework and a custom prototype camera, we demonstrate state-of-the-art image and depth estimation quality among end-to-end optimized computational cameras in simulation and experiment.
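The occlusion-aware image formation model mentioned in the abstract can be illustrated with a standard layered rendering scheme: the scene is split into depth layers, each layer and its occupancy mask are blurred with that depth's point spread function, and the blurred layers are composited front over back so defocus blur leaks correctly across depth discontinuities. The sketch below is a minimal NumPy illustration of that general idea, not the paper's actual implementation; the function names and the simple FFT-based convolution are assumptions for this example.

```python
import numpy as np


def conv2_same(x, k):
    """2-D linear convolution of image x with kernel k, cropped to x's size."""
    H, W = x.shape
    kh, kw = k.shape
    fh, fw = H + kh - 1, W + kw - 1  # full-convolution size
    out = np.fft.irfft2(np.fft.rfft2(x, (fh, fw)) * np.fft.rfft2(k, (fh, fw)), (fh, fw))
    top, left = (kh - 1) // 2, (kw - 1) // 2
    return out[top:top + H, left:left + W]


def layered_defocus_image(layers, alphas, psfs):
    """Simplified occlusion-aware defocus rendering (hypothetical sketch).

    layers: list of HxW intensity images, ordered far to near
    alphas: list of HxW occupancy masks (1 where the layer has content)
    psfs:   list of depth-dependent PSF kernels (each sums to 1)

    Each layer and its mask are blurred with that depth's PSF, then
    composited with the "over" operator so near, defocused occluders
    partially reveal the background at depth edges.
    """
    image = np.zeros_like(layers[0], dtype=float)
    for img, a, psf in zip(layers, alphas, psfs):  # far to near
        blurred = conv2_same(img * a, psf)            # premultiplied color
        coverage = np.clip(conv2_same(a, psf), 0.0, 1.0)  # blurred alpha
        image = blurred + (1.0 - coverage) * image    # "over" compositing
    return image
```

In an end-to-end framework like the one described, the PSFs would not be fixed: they would be computed differentiably from the phase-coded aperture's parameters so that gradients from the depth estimation loss can update the optics.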
Persistent Identifier: http://hdl.handle.net/10722/315355
ISI Accession Number ID: WOS:000693440600005

 

DC Field: Value
dc.contributor.author: Ikoma, Hayato
dc.contributor.author: Nguyen, Cindy M.
dc.contributor.author: Metzler, Christopher A.
dc.contributor.author: Peng, Yifan
dc.contributor.author: Wetzstein, Gordon
dc.date.accessioned: 2022-08-05T10:18:35Z
dc.date.available: 2022-08-05T10:18:35Z
dc.date.issued: 2021
dc.identifier.citation: 2021 IEEE International Conference on Computational Photography, ICCP 2021, 2021, article no. 9466261
dc.identifier.uri: http://hdl.handle.net/10722/315355
dc.description.abstract: Monocular depth estimation remains a challenging problem, despite significant advances in neural network architectures that leverage pictorial depth cues alone. Inspired by depth from defocus and emerging point spread function engineering approaches that optimize programmable optics end-to-end with depth estimation networks, we propose a new and improved framework for depth estimation from a single RGB image using a learned phase-coded aperture. Our optimized aperture design uses rotational symmetry constraints for computational efficiency, and we jointly train the optics and the network using an occlusion-aware image formation model that provides more accurate defocus blur at depth discontinuities than previous techniques do. Using this framework and a custom prototype camera, we demonstrate state-of-the-art image and depth estimation quality among end-to-end optimized computational cameras in simulation and experiment.
dc.language: eng
dc.relation.ispartof: 2021 IEEE International Conference on Computational Photography, ICCP 2021
dc.subject: Computational Optics
dc.subject: Computational Photography
dc.title: Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCP51581.2021.9466261
dc.identifier.scopus: eid_2-s2.0-85114275006
dc.identifier.spage: article no. 9466261
dc.identifier.epage: article no. 9466261
dc.identifier.isi: WOS:000693440600005
