
Article: ProLiF: Progressively-connected Light Field network for efficient view synthesis

Title: ProLiF: Progressively-connected Light Field network for efficient view synthesis
Authors: Wang, Peng; Liu, Yuan; Lin, Guying; Gu, Jiatao; Liu, Lingjie; Komura, Taku; Wang, Wenping
Keywords: Light field; Neural rendering; View synthesis
Issue Date: 1-May-2024
Publisher: Elsevier
Citation: Computers and Graphics, 2024, v. 120
Abstract: This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. View synthesis has advanced significantly with the recent Neural Radiance Fields (NeRF). However, training a NeRF requires hundreds of network evaluations to synthesize a single pixel color, which consumes substantial device memory and time. This prevents the differentiable rendering of a large patch of pixels in the training stage for semantic-level supervision, which is critical for many practical applications such as robust scene fitting, style transfer, and adversarial training. In contrast, our proposed simple architecture, ProLiF, encodes a two-plane light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. To keep the multi-view 3D consistency of the neural light field, we propose a progressive training strategy with novel regularization losses. We demonstrate that ProLiF is compatible with LPIPS loss, achieving robustness to varying lighting conditions, and with NNFM and CLIP losses for editing the rendering style of the scene.
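The two-plane light field encoding that the abstract refers to can be illustrated with a minimal sketch (assuming NumPy; the function and variable names are illustrative, not taken from the paper). Each ray is reduced to the 4D coordinate of its intersections with two parallel planes, so a network can map (u, v, s, t) directly to color in a single evaluation instead of marching hundreds of samples per ray as in NeRF:

```python
import numpy as np

def two_plane_coords(origins, dirs, z0=0.0, z1=1.0):
    """Map rays to 4D two-plane light-field coordinates (u, v, s, t).

    Each ray is intersected with the planes z = z0 and z = z1; the (x, y)
    intersection points on the two planes parameterize the ray.
    origins, dirs: (N, 3) arrays; dirs must have a nonzero z component.
    Returns an (N, 4) array of [u, v, s, t] coordinates.
    """
    t0 = (z0 - origins[:, 2]) / dirs[:, 2]           # parameter at plane z = z0
    t1 = (z1 - origins[:, 2]) / dirs[:, 2]           # parameter at plane z = z1
    uv = origins[:, :2] + t0[:, None] * dirs[:, :2]  # hit point on first plane
    st = origins[:, :2] + t1[:, None] * dirs[:, :2]  # hit point on second plane
    return np.concatenate([uv, st], axis=1)

# A ray from (1, 2, -1) along (0.5, 0, 1) hits z=0 at (1.5, 2) and z=1 at (2, 2).
coords = two_plane_coords(np.array([[1.0, 2.0, -1.0]]),
                          np.array([[0.5, 0.0, 1.0]]))
```

Because the mapping above is a few vectorized operations, a whole image patch of rays can be encoded and rendered in one training step, which is what enables the image- and patch-level losses (LPIPS, NNFM, CLIP) mentioned in the abstract.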
Persistent Identifier: http://hdl.handle.net/10722/350450

 

DC Field | Value | Language
dc.contributor.author | Wang, Peng | -
dc.contributor.author | Liu, Yuan | -
dc.contributor.author | Lin, Guying | -
dc.contributor.author | Gu, Jiatao | -
dc.contributor.author | Liu, Lingjie | -
dc.contributor.author | Komura, Taku | -
dc.contributor.author | Wang, Wenping | -
dc.date.accessioned | 2024-10-29T00:31:39Z | -
dc.date.available | 2024-10-29T00:31:39Z | -
dc.date.issued | 2024-05-01 | -
dc.identifier.citation | Computers and Graphics, 2024, v. 120 | -
dc.identifier.uri | http://hdl.handle.net/10722/350450 | -
dc.description.abstract | This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. View synthesis has advanced significantly with the recent Neural Radiance Fields (NeRF). However, training a NeRF requires hundreds of network evaluations to synthesize a single pixel color, which consumes substantial device memory and time. This prevents the differentiable rendering of a large patch of pixels in the training stage for semantic-level supervision, which is critical for many practical applications such as robust scene fitting, style transfer, and adversarial training. In contrast, our proposed simple architecture, ProLiF, encodes a two-plane light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. To keep the multi-view 3D consistency of the neural light field, we propose a progressive training strategy with novel regularization losses. We demonstrate that ProLiF is compatible with LPIPS loss, achieving robustness to varying lighting conditions, and with NNFM and CLIP losses for editing the rendering style of the scene. | -
dc.language | eng | -
dc.publisher | Elsevier | -
dc.relation.ispartof | Computers and Graphics | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Light field | -
dc.subject | Neural rendering | -
dc.subject | View synthesis | -
dc.title | ProLiF: Progressively-connected Light Field network for efficient view synthesis | -
dc.type | Article | -
dc.identifier.doi | 10.1016/j.cag.2024.103913 | -
dc.identifier.scopus | eid_2-s2.0-85190786913 | -
dc.identifier.volume | 120 | -
dc.identifier.eissn | 0097-8493 | -
dc.identifier.issnl | 0097-8493 | -
