Links for fulltext (May Require Subscription)
- DOI: 10.1109/ICCP51581.2021.9466261
- Scopus: eid_2-s2.0-85114275006
- Web of Science: WOS:000693440600005
Conference Paper: Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation
Title | Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation |
---|---|
Authors | Ikoma, Hayato; Nguyen, Cindy M.; Metzler, Christopher A.; Peng, Yifan; Wetzstein, Gordon |
Keywords | Computational Optics; Computational Photography |
Issue Date | 2021 |
Citation | 2021 IEEE International Conference on Computational Photography, ICCP 2021, 2021, article no. 9466261 |
Abstract | Monocular depth estimation remains a challenging problem, despite significant advances in neural network architectures that leverage pictorial depth cues alone. Inspired by depth from defocus and emerging point spread function engineering approaches that optimize programmable optics end-to-end with depth estimation networks, we propose a new and improved framework for depth estimation from a single RGB image using a learned phase-coded aperture. Our optimized aperture design uses rotational symmetry constraints for computational efficiency, and we jointly train the optics and the network using an occlusion-aware image formation model that provides more accurate defocus blur at depth discontinuities than previous techniques do. Using this framework and a custom prototype camera, we demonstrate state-of-the-art image and depth estimation quality among end-to-end optimized computational cameras in simulation and experiment. |
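As an illustrative aside, the "occlusion-aware image formation" the abstract refers to can be sketched as back-to-front compositing of depth layers, where each layer and its occupancy mask are blurred by a depth-dependent PSF before compositing, so occlusion boundaries inherit the occluder's defocus blur. This is only a minimal sketch under simplifying assumptions: the Gaussian PSF, the function name, and the layer representation here are hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def composite_occlusion_aware(layers, alphas, sigmas):
    """Back-to-front compositing of depth layers with depth-dependent blur.

    layers: list of HxW intensity images, ordered far -> near
    alphas: list of HxW occupancy masks in [0, 1], same order
    sigmas: per-layer blur widths (Gaussian stand-ins for defocus PSFs)
    """
    canvas = np.zeros_like(layers[0], dtype=float)
    for img, a, s in zip(layers, alphas, sigmas):
        # Blur both the premultiplied layer and its mask, so that
        # depth discontinuities are blurred by the occluder's PSF
        # rather than showing a hard cutout edge.
        img_blur = gaussian_filter(img * a, s)
        a_blur = gaussian_filter(a, s)
        canvas = canvas * (1.0 - a_blur) + img_blur
    return canvas
```

With all blur widths set to zero this reduces to ordinary hard-edged alpha compositing, which makes the role of the blurred masks easy to check.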
Persistent Identifier | http://hdl.handle.net/10722/315355 |
ISI Accession Number ID | WOS:000693440600005 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ikoma, Hayato | - |
dc.contributor.author | Nguyen, Cindy M. | - |
dc.contributor.author | Metzler, Christopher A. | - |
dc.contributor.author | Peng, Yifan | - |
dc.contributor.author | Wetzstein, Gordon | - |
dc.date.accessioned | 2022-08-05T10:18:35Z | - |
dc.date.available | 2022-08-05T10:18:35Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | 2021 IEEE International Conference on Computational Photography, ICCP 2021, 2021, article no. 9466261 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315355 | - |
dc.description.abstract | Monocular depth estimation remains a challenging problem, despite significant advances in neural network architectures that leverage pictorial depth cues alone. Inspired by depth from defocus and emerging point spread function engineering approaches that optimize programmable optics end-to-end with depth estimation networks, we propose a new and improved framework for depth estimation from a single RGB image using a learned phase-coded aperture. Our optimized aperture design uses rotational symmetry constraints for computational efficiency, and we jointly train the optics and the network using an occlusion-aware image formation model that provides more accurate defocus blur at depth discontinuities than previous techniques do. Using this framework and a custom prototype camera, we demonstrate state-of-the-art image and depth estimation quality among end-to-end optimized computational cameras in simulation and experiment. | - |
dc.language | eng | - |
dc.relation.ispartof | 2021 IEEE International Conference on Computational Photography, ICCP 2021 | - |
dc.subject | Computational Optics | - |
dc.subject | Computational Photography | - |
dc.title | Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCP51581.2021.9466261 | - |
dc.identifier.scopus | eid_2-s2.0-85114275006 | - |
dc.identifier.spage | article no. 9466261 | - |
dc.identifier.epage | article no. 9466261 | - |
dc.identifier.isi | WOS:000693440600005 | - |