Conference Paper: A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis

Title: A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
Authors: Pan, Xingang; Xu, Xudong; Loy, Chen Change; Theobalt, Christian; Dai, Bo
Issue Date: 2021
Citation: Advances in Neural Information Processing Systems, 2021, v. 24, p. 20002-20013
Abstract: The advancement of generative radiance fields has pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting.
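The core idea in the abstract — shading an implicit shape under randomly sampled lighting, with surface normals taken from the gradient of the density field — can be illustrated with a minimal sketch. This is not the paper's actual network or renderer: the toy density field, the finite-difference normal estimate, and the simple Lambertian model below are illustrative assumptions only.

```python
import numpy as np

def density(p):
    """Toy implicit density: a soft spherical shell of radius 0.5 at the
    origin, standing in for a learned radiance-field density."""
    return np.exp(-8.0 * (np.linalg.norm(p, axis=-1) - 0.5) ** 2)

def normal(p, eps=1e-3):
    """Surface normal as the normalized negative density gradient,
    estimated here by central finite differences (an autodiff network
    would supply this gradient analytically)."""
    g = np.stack(
        [(density(p + eps * e) - density(p - eps * e)) / (2 * eps)
         for e in np.eye(3)],
        axis=-1,
    )
    n = -g
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-9)

def lambertian(albedo, n, light_dir, ambient=0.2, diffuse=0.8):
    """Shade an albedo value with one directional light. The multi-lighting
    constraint corresponds to resampling light_dir for each rendered image
    before it is shown to the discriminator."""
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * (ambient + diffuse * np.clip(n @ l, 0.0, None))

p = np.array([0.6, 0.0, 0.0])        # a point just outside the toy shell
n = normal(p)                        # points radially outward here
rng = np.random.default_rng(0)
light = rng.normal(size=3)           # one random lighting condition
shaded = lambertian(0.7, n, light)
```

Under this kind of model, a wrong shape (inaccurate normals) produces implausible shading under some sampled lights, which the discriminator can penalize — whereas a pure appearance constraint cannot separate shape from color.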
Persistent Identifierhttp://hdl.handle.net/10722/352279
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
DC Field: Value
dc.contributor.author: Pan, Xingang
dc.contributor.author: Xu, Xudong
dc.contributor.author: Loy, Chen Change
dc.contributor.author: Theobalt, Christian
dc.contributor.author: Dai, Bo
dc.date.accessioned: 2024-12-16T03:57:45Z
dc.date.available: 2024-12-16T03:57:45Z
dc.date.issued: 2021
dc.identifier.citation: Advances in Neural Information Processing Systems, 2021, v. 24, p. 20002-20013
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/352279
dc.description.abstract: The advancement of generative radiance fields has pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85127267871
dc.identifier.volume: 24
dc.identifier.spage: 20002
dc.identifier.epage: 20013
