Conference Paper: Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces
Field | Value
---|---
Title | Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces
Authors | Zhi, Yihao; Qian, Shenhan; Yan, Xinhao; Gao, Shenghua
Issue Date | 2022
Citation | Proceedings - 2022 International Conference on 3D Vision, 3DV 2022, 2022, p. 363-372
Abstract | Modeling the human body in a canonical space is a common practice for capturing and animation. But when involving the neural radiance field (NeRF), learning a static NeRF in the canonical space is not enough because the lighting of the body changes when the person moves, even though the scene lighting is constant. Previous methods alleviate the inconsistency of lighting by learning a per-frame embedding, but this operation does not generalize to unseen poses. Given that the lighting condition is static in the world space while the human body is consistent in the canonical space, we propose a dual-space NeRF that models the scene lighting and the human body with two MLPs in two separate spaces. To bridge these two spaces, previous methods mostly rely on the linear blend skinning (LBS) algorithm. However, the blending weights for LBS of a dynamic neural field are intractable and thus are usually memorized with another MLP, which does not generalize to novel poses. Although it is possible to borrow the blending weights of a parametric mesh such as SMPL, the interpolation operation introduces more artifacts. In this paper, we propose to use the barycentric mapping, which can directly generalize to unseen poses and surprisingly achieves superior results to LBS with neural blending weights. Quantitative and qualitative results on the Human3.6M and ZJU-MoCap datasets show the effectiveness of our method. Our code is available at: https://github.com/zyhbili/Dual-Space-NeRF.
Persistent Identifier | http://hdl.handle.net/10722/345305
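
The abstract describes an architecture that is easier to picture with a concrete query path: a body MLP evaluated in the canonical (rest-pose) space, a lighting MLP evaluated in the world space, and a barycentric mapping through the posed SMPL mesh that carries each world-space sample point into the canonical space. The Python sketch below is not the authors' implementation (see the linked repository for that); it is a minimal illustration under my reading of the abstract, and the function names, the toy nearest-triangle search, and the stand-in MLPs are hypothetical simplifications.

```python
import numpy as np

def barycentric_coords(p, tri):
    """Barycentric coordinates of p (projected onto the plane of) triangle tri (3x3)."""
    a, b, c = tri
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def world_to_canonical(p_world, posed_verts, canon_verts, faces):
    """Map a world-space point into canonical space via the nearest posed triangle."""
    # Crude nearest-triangle search by centroid distance; a real implementation
    # would use a proper point-to-surface query (e.g. a BVH or KD-tree).
    centroids = posed_verts[faces].mean(axis=1)
    f = faces[np.argmin(np.linalg.norm(centroids - p_world, axis=1))]
    bary = barycentric_coords(p_world, posed_verts[f])
    # Re-apply the same barycentric weights on the corresponding canonical triangle.
    return bary @ canon_verts[f]

def dual_space_query(p_world, posed_verts, canon_verts, faces, body_mlp, light_mlp):
    """Query the body MLP in canonical space and the lighting MLP in world space."""
    p_canon = world_to_canonical(p_world, posed_verts, canon_verts, faces)
    density, albedo = body_mlp(p_canon)   # pose-independent geometry and appearance
    shading = light_mlp(p_world)          # scene lighting, fixed in world coordinates
    return density, albedo * shading      # shaded color for volume rendering

if __name__ == "__main__":
    # Toy single-triangle "mesh" and constant stand-ins for the two MLPs.
    canon = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
    posed = canon + np.array([0., 0., 0.5])   # "posed" by a rigid offset
    faces = np.array([[0, 1, 2]])
    body_mlp = lambda x: (1.0, np.array([0.8, 0.6, 0.4]))
    light_mlp = lambda x: 0.9
    print(dual_space_query(np.array([0.2, 0.2, 0.5]), posed, canon, faces, body_mlp, light_mlp))
```

Because the mapping depends only on the geometry of the posed and canonical meshes, with no learned blending weights, it applies unchanged to poses never seen during training, which is the generalization argument the abstract makes against LBS weights memorized by an extra MLP.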
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhi, Yihao | - |
dc.contributor.author | Qian, Shenhan | - |
dc.contributor.author | Yan, Xinhao | - |
dc.contributor.author | Gao, Shenghua | - |
dc.date.accessioned | 2024-08-15T09:26:31Z | - |
dc.date.available | 2024-08-15T09:26:31Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Proceedings - 2022 International Conference on 3D Vision, 3DV 2022, 2022, p. 363-372 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345305 | - |
dc.description.abstract | Modeling the human body in a canonical space is a common practice for capturing and animation. But when involving the neural radiance field (NeRF), learning a static NeRF in the canonical space is not enough because the lighting of the body changes when the person moves, even though the scene lighting is constant. Previous methods alleviate the inconsistency of lighting by learning a per-frame embedding, but this operation does not generalize to unseen poses. Given that the lighting condition is static in the world space while the human body is consistent in the canonical space, we propose a dual-space NeRF that models the scene lighting and the human body with two MLPs in two separate spaces. To bridge these two spaces, previous methods mostly rely on the linear blend skinning (LBS) algorithm. However, the blending weights for LBS of a dynamic neural field are intractable and thus are usually memorized with another MLP, which does not generalize to novel poses. Although it is possible to borrow the blending weights of a parametric mesh such as SMPL, the interpolation operation introduces more artifacts. In this paper, we propose to use the barycentric mapping, which can directly generalize to unseen poses and surprisingly achieves superior results to LBS with neural blending weights. Quantitative and qualitative results on the Human3.6M and ZJU-MoCap datasets show the effectiveness of our method. Our code is available at: https://github.com/zyhbili/Dual-Space-NeRF. | -
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 2022 International Conference on 3D Vision, 3DV 2022 | - |
dc.title | Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/3DV57658.2022.00048 | - |
dc.identifier.scopus | eid_2-s2.0-85146890222 | - |
dc.identifier.spage | 363 | - |
dc.identifier.epage | 372 | - |