
Conference Paper: Spatial and Angular Reconstruction of Light Field Based on Deep Generative Networks

Title: Spatial and Angular Reconstruction of Light Field Based on Deep Generative Networks
Authors: Meng, N; Zeng, T; Lam, EYM
Keywords: Light field reconstruction; generative adversarial networks; computational imaging; high-dimensional convolution; deep learning
Issue Date: 2019
Publisher: IEEE. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000349
Citation: 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September 2019, p. 4659-4663
Abstract: Light field (LF) cameras often have significant limitations in spatial and angular resolutions due to their design. Many techniques that attempt to reconstruct LF images at a higher resolution only consider either spatial or angular resolution, but not both. We propose a generative network using high-dimensional convolution to improve both aspects. Our experimental results on both synthetic and real-world data demonstrate that the proposed model outperforms existing state-of-the-art methods in terms of both peak signal-to-noise ratio (PSNR) and visual quality. The proposed method can also generate more realistic spatial details with better fidelity.
Persistent Identifier: http://hdl.handle.net/10722/288222
ISSN: 1522-4880
2020 SCImago Journal Rankings: 0.315
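
The record contains no code, but the abstract describes two concrete ingredients: a high-dimensional (spatial-angular) convolution over the 4-D light field and evaluation by PSNR. The snippet below is a minimal, hypothetical sketch of those two pieces only, assuming a tensor layout (B, C, U, V, X, Y) and a separable spatial/angular factorization in PyTorch; the names LightFieldConv and psnr are illustrative and are not taken from the paper, and this is not the authors' network.

    # Hypothetical sketch: spatial-angular separable convolution over a 4-D light field,
    # one common way to approximate "high-dimensional" convolution without a native Conv4d.
    import torch
    import torch.nn as nn

    class LightFieldConv(nn.Module):
        """Apply a 2-D convolution over the spatial axes, then over the angular axes."""
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.spatial = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            self.angular = nn.Conv2d(out_ch, out_ch, k, padding=k // 2)

        def forward(self, lf):
            # lf: (B, C, U, V, X, Y) with angular axes (U, V) and spatial axes (X, Y)
            b, c, u, v, x, y = lf.shape
            # spatial pass: fold the angular axes into the batch dimension
            s = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, x, y)
            s = self.spatial(s)
            c2 = s.shape[1]
            # angular pass: fold the spatial axes into the batch dimension
            a = s.reshape(b, u, v, c2, x, y).permute(0, 4, 5, 3, 1, 2).reshape(b * x * y, c2, u, v)
            a = self.angular(a)
            # restore the (B, C, U, V, X, Y) layout
            return a.reshape(b, x, y, c2, u, v).permute(0, 3, 4, 5, 1, 2)

    def psnr(pred, target, peak=1.0):
        """Peak signal-to-noise ratio, the fidelity metric quoted in the abstract."""
        mse = torch.mean((pred - target) ** 2)
        return 10.0 * torch.log10(peak ** 2 / mse)

    if __name__ == "__main__":
        lf = torch.rand(1, 3, 5, 5, 32, 32)   # toy light field: 5x5 views of 32x32 pixels
        out = LightFieldConv(3, 8)(lf)
        print(out.shape)                      # torch.Size([1, 8, 5, 5, 32, 32])

The separable factorization is only one possible design choice; it keeps the parameter count low while still mixing information across both the spatial and the angular dimensions that the paper targets.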

 

DC Field: Value
dc.contributor.author: Meng, N
dc.contributor.author: Zeng, T
dc.contributor.author: Lam, EYM
dc.date.accessioned: 2020-10-05T12:09:41Z
dc.date.available: 2020-10-05T12:09:41Z
dc.date.issued: 2019
dc.identifier.citation: 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22-25 September 2019, p. 4659-4663
dc.identifier.issn: 1522-4880
dc.identifier.uri: http://hdl.handle.net/10722/288222
dc.description.abstract: Light field (LF) cameras often have significant limitations in spatial and angular resolutions due to their design. Many techniques that attempt to reconstruct LF images at a higher resolution only consider either spatial or angular resolution, but not both. We propose a generative network using high-dimensional convolution to improve both aspects. Our experimental results on both synthetic and real-world data demonstrate that the proposed model outperforms existing state-of-the-art methods in terms of both peak signal-to-noise ratio (PSNR) and visual quality. The proposed method can also generate more realistic spatial details with better fidelity.
dc.language: eng
dc.publisher: IEEE. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000349
dc.relation.ispartof: International Conference on Image Processing Proceedings
dc.rights: International Conference on Image Processing Proceedings. Copyright © IEEE.
dc.rights: ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Light field reconstruction
dc.subject: generative adversarial networks
dc.subject: computational imaging
dc.subject: high-dimensional convolution
dc.subject: deep learning
dc.title: Spatial and Angular Reconstruction of Light Field Based on Deep Generative Networks
dc.type: Conference_Paper
dc.identifier.email: Lam, EYM: elam@eee.hku.hk
dc.identifier.authority: Lam, EYM=rp00131
dc.description.nature: postprint
dc.identifier.doi: 10.1109/ICIP.2019.8803480
dc.identifier.scopus: eid_2-s2.0-85076818682
dc.identifier.hkuros: 314918
dc.identifier.spage: 4659
dc.identifier.epage: 4663
dc.publisher.place: United States
dc.identifier.issnl: 1522-4880
