File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1016/j.cag.2024.103913
- Scopus: eid_2-s2.0-85190786913
Citations:
- Scopus: 0
Article: ProLiF: Progressively-connected Light Field network for efficient view synthesis
Field | Value
---|---
Title | ProLiF: Progressively-connected Light Field network for efficient view synthesis
Authors | Wang, Peng; Liu, Yuan; Lin, Guying; Gu, Jiatao; Liu, Lingjie; Komura, Taku; Wang, Wenping
Keywords | Light field; Neural rendering; View synthesis
Issue Date | 1-May-2024
Publisher | Elsevier
Citation | Computers and Graphics, 2024, v. 120
Abstract | This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. View synthesis has advanced significantly with the recent Neural Radiance Fields (NeRF). However, training a NeRF requires hundreds of network evaluations to synthesize a single pixel color, which consumes substantial device memory and time. This prevents differentiable rendering of a large patch of pixels in the training stage for semantic-level supervision, which is critical for many practical applications such as robust scene fitting, style transfer, and adversarial training. In contrast, our simple architecture ProLiF encodes a two-plane light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. To keep the multi-view 3D consistency of the neural light field, we propose a progressive training strategy with novel regularization losses. We demonstrate that ProLiF is compatible with the LPIPS loss to achieve robustness to varying lighting conditions, and with the NNFM and CLIP losses to edit the rendering style of the scene.
Persistent Identifier | http://hdl.handle.net/10722/350450
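
The abstract above describes encoding each ray by its intersections with two parallel planes, so that a single network evaluation yields one pixel color and a whole patch of rays can be rendered in one differentiable step. The snippet below is only a minimal illustrative sketch of that idea in PyTorch, not the authors' implementation: the plane depths, the `LightFieldMLP` module, and all sizes are assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's code) of a two-plane light field:
# a ray is identified by its hits (u, v) and (s, t) on two fixed planes, and a
# small network maps that 4D coordinate directly to an RGB color.
import torch
import torch.nn as nn


def two_plane_coords(origins, dirs, z_near=0.0, z_far=1.0):
    """Intersect rays origin + t * dir with the planes z = z_near and z = z_far
    and return the 4D two-plane coordinate (u, v, s, t) for each ray."""
    t_near = (z_near - origins[:, 2]) / dirs[:, 2]
    t_far = (z_far - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_near[:, None] * dirs[:, :2]  # hit on the near plane
    st = origins[:, :2] + t_far[:, None] * dirs[:, :2]   # hit on the far plane
    return torch.cat([uv, st], dim=-1)                   # shape (N, 4)


class LightFieldMLP(nn.Module):
    """Tiny stand-in for the light-field network: 4D ray coordinate -> RGB."""

    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rays_4d):
        return self.net(rays_4d)


# One network evaluation per ray: a hypothetical 64x64 pixel patch is rendered
# differentiably in a single forward pass.
origins = torch.zeros(64 * 64, 3)
origins[:, 2] = -1.0                    # camera placed behind both planes
dirs = torch.randn(64 * 64, 3)
dirs[:, 2] = 1.0                        # rays point toward +z, so both planes are hit
model = LightFieldMLP()
colors = model(two_plane_coords(origins, dirs))  # (4096, 3) RGB values
```

Because each ray needs only one forward pass, rather than hundreds of samples along the ray as in NeRF, a rendered patch like this can be compared directly against an image- or patch-level loss such as LPIPS, which is the property the abstract emphasizes.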
DC Field | Value | Language |
---|---|---
dc.contributor.author | Wang, Peng | - |
dc.contributor.author | Liu, Yuan | - |
dc.contributor.author | Lin, Guying | - |
dc.contributor.author | Gu, Jiatao | - |
dc.contributor.author | Liu, Lingjie | - |
dc.contributor.author | Komura, Taku | - |
dc.contributor.author | Wang, Wenping | - |
dc.date.accessioned | 2024-10-29T00:31:39Z | - |
dc.date.available | 2024-10-29T00:31:39Z | - |
dc.date.issued | 2024-05-01 | - |
dc.identifier.citation | Computers and Graphics, 2024, v. 120 | - |
dc.identifier.uri | http://hdl.handle.net/10722/350450 | - |
dc.description.abstract | This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. View synthesis has advanced significantly with the recent Neural Radiance Fields (NeRF). However, training a NeRF requires hundreds of network evaluations to synthesize a single pixel color, which consumes substantial device memory and time. This prevents differentiable rendering of a large patch of pixels in the training stage for semantic-level supervision, which is critical for many practical applications such as robust scene fitting, style transfer, and adversarial training. In contrast, our simple architecture ProLiF encodes a two-plane light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. To keep the multi-view 3D consistency of the neural light field, we propose a progressive training strategy with novel regularization losses. We demonstrate that ProLiF is compatible with the LPIPS loss to achieve robustness to varying lighting conditions, and with the NNFM and CLIP losses to edit the rendering style of the scene. | -
dc.language | eng | - |
dc.publisher | Elsevier | - |
dc.relation.ispartof | Computers and Graphics | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Light field | - |
dc.subject | Neural rendering | - |
dc.subject | View synthesis | - |
dc.title | ProLiF: Progressively-connected Light Field network for efficient view synthesis | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.cag.2024.103913 | - |
dc.identifier.scopus | eid_2-s2.0-85190786913 | - |
dc.identifier.volume | 120 | - |
dc.identifier.eissn | 0097-8493 | - |
dc.identifier.issnl | 0097-8493 | - |