File Download: There are no files associated with this item.
Links for fulltext (may require subscription): Supplementary
Article: HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks

Title: HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks
Authors: Chen, Zhuo; Xu, Xudong; Yan, Yichao; Pan, Ye; Zhu, Wenhan; Wu, Wayne; Dai, Bo; Yang, Xiaokang
Keywords: 3D-aware GAN; hyper-network; style transfer
Issue Date: 2024
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2024, v. 34, n. 10, p. 9997-10010
Abstract: Portrait stylization is a long-standing task with extensive applications. Although 2D-based methods have made great progress in recent years, real-world applications such as the metaverse and games often demand 3D content. However, the requirement for 3D data, which is costly to acquire, significantly impedes the development of 3D portrait stylization methods. In this paper, inspired by the success of 3D-aware GANs, which bridge the 2D and 3D domains by using 3D fields as the intermediate representation for rendering 2D images, we propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization. At the core of our method is a hyper-network learned to manipulate the parameters of the generator in a single forward pass. It not only offers a strong capacity to handle multiple styles with a single model, but also enables flexible, fine-grained stylization that affects only the texture, the shape, or a local part of the portrait. While the use of 3D-aware GANs bypasses the requirement for 3D data, we further alleviate the need for style images by adopting the CLIP model as the style guidance. We conduct an extensive set of experiments across style, attribute, and shape manipulation, and also measure 3D consistency. These experiments demonstrate the superior capability of our HyperStyle3D model in rendering 3D-consistent images in diverse styles, deforming the face shape, and editing various attributes. Our project page: https://windlikestone.github.io/HyperStyle3D-website/.
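The abstract's central mechanism, a hyper-network that predicts parameter offsets for a frozen generator from a style embedding in a single forward pass, can be illustrated with a short sketch. The following is a minimal toy in PyTorch, not the authors' implementation: the MLP sizes, the single linear layer standing in for the 3D-aware generator, and the random tensor standing in for a CLIP text embedding are all illustrative assumptions.

```python
# Toy sketch of the hyper-network idea from the abstract: given a style
# embedding, predict a per-layer weight offset (Delta-W) for a frozen
# generator in one forward pass. Sizes and modules are illustrative
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    """Maps a style embedding to a weight offset for one generator layer."""
    def __init__(self, embed_dim, out_features, in_features):
        super().__init__()
        self.out_shape = (out_features, in_features)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_features * in_features),
        )

    def forward(self, style_emb):
        # One forward pass yields the offset; no per-style optimization loop.
        return self.mlp(style_emb).view(self.out_shape)

class StylizedLinear(nn.Module):
    """A frozen 'pretrained' layer whose effective weight is W + Delta-W(style)."""
    def __init__(self, in_features, out_features, embed_dim):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.02, requires_grad=False
        )
        self.hyper = HyperNetwork(embed_dim, out_features, in_features)

    def forward(self, x, style_emb):
        return x @ (self.weight + self.hyper(style_emb)).t()

embed_dim = 512
layer = StylizedLinear(64, 64, embed_dim)
style_emb = torch.randn(embed_dim)   # stand-in for a CLIP text embedding
features = torch.randn(4, 64)        # stand-in intermediate generator features
out = layer(features, style_emb)     # shape: (4, 64)
```

Because only the hyper-network is trained while the generator stays frozen, one model can serve multiple styles, and restricting which layers receive offsets is what would enable the fine-grained control over texture, shape, or local parts described in the abstract.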
Persistent Identifier: http://hdl.handle.net/10722/352441
ISSN: 1051-8215
2023 Impact Factor: 8.3
2023 SCImago Journal Rankings: 2.299

 

DC Field: Value

dc.contributor.author: Chen, Zhuo
dc.contributor.author: Xu, Xudong
dc.contributor.author: Yan, Yichao
dc.contributor.author: Pan, Ye
dc.contributor.author: Zhu, Wenhan
dc.contributor.author: Wu, Wayne
dc.contributor.author: Dai, Bo
dc.contributor.author: Yang, Xiaokang
dc.date.accessioned: 2024-12-16T03:58:58Z
dc.date.available: 2024-12-16T03:58:58Z
dc.date.issued: 2024
dc.identifier.citation: IEEE Transactions on Circuits and Systems for Video Technology, 2024, v. 34, n. 10, p. 9997-10010
dc.identifier.issn: 1051-8215
dc.identifier.uri: http://hdl.handle.net/10722/352441
dc.description.abstract: Portrait stylization is a long-standing task with extensive applications. Although 2D-based methods have made great progress in recent years, real-world applications such as the metaverse and games often demand 3D content. However, the requirement for 3D data, which is costly to acquire, significantly impedes the development of 3D portrait stylization methods. In this paper, inspired by the success of 3D-aware GANs, which bridge the 2D and 3D domains by using 3D fields as the intermediate representation for rendering 2D images, we propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization. At the core of our method is a hyper-network learned to manipulate the parameters of the generator in a single forward pass. It not only offers a strong capacity to handle multiple styles with a single model, but also enables flexible, fine-grained stylization that affects only the texture, the shape, or a local part of the portrait. While the use of 3D-aware GANs bypasses the requirement for 3D data, we further alleviate the need for style images by adopting the CLIP model as the style guidance. We conduct an extensive set of experiments across style, attribute, and shape manipulation, and also measure 3D consistency. These experiments demonstrate the superior capability of our HyperStyle3D model in rendering 3D-consistent images in diverse styles, deforming the face shape, and editing various attributes. Our project page: https://windlikestone.github.io/HyperStyle3D-website/.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Circuits and Systems for Video Technology
dc.subject: 3D-aware GAN
dc.subject: hyper-network
dc.subject: style transfer
dc.title: HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TCSVT.2024.3407135
dc.identifier.scopus: eid_2-s2.0-85194894569
dc.identifier.volume: 34
dc.identifier.issue: 10
dc.identifier.spage: 9997
dc.identifier.epage: 10010
dc.identifier.eissn: 1558-2205
