Article: Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis

Title: Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
Authors: Liu, Wen; Piao, Zhixin; Tu, Zhi; Luo, Wenhan; Ma, Lin; Gao, Shenghua
Keywords: appearance transfer; generative adversarial network; human image synthesis; motion imitation; novel view synthesis
Issue Date: 2022
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, v. 44, n. 9, p. 5114-5131
Abstract: We tackle human image synthesis, including human motion imitation, appearance transfer, and novel view synthesis, within a unified framework: once trained, the model can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints (pose) to estimate the human body structure; however, keypoints express only position and cannot characterize a person's individual body shape or model limb rotations. In this paper, we propose a 3D body mesh recovery module to disentangle pose and shape, which models not only joint locations and rotations but also the personalized body shape. To preserve source information such as texture, style, color, and face identity, we propose an Attentional Liquid Warping GAN with an Attentional Liquid Warping Block (AttLWB) that propagates source information in both image and feature spaces to the synthesized reference. Specifically, the source features are extracted by a denoising convolutional auto-encoder to characterize the source identity well. Our method also supports flexible warping from multiple sources. To further improve generalization to unseen source images, one/few-shot adversarial learning is applied: the model is first trained on an extensive training set, then fine-tuned on one or a few unseen images in a self-supervised way to generate high-resolution (512×512 and 1024×1024) results. We also build a new dataset, the Impersonator (iPER) dataset, for evaluating human motion imitation, appearance transfer, and novel view synthesis. Extensive experiments demonstrate the effectiveness of our method in preserving face identity, shape consistency, and clothing details. All code and the dataset are available at https://impersonator.org/work/impersonator-plus-plus.html.
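The core idea of warping source features to a target pose and fusing several warped sources with attention can be sketched in a few lines. Everything below (the function names, nearest-neighbour sampling, the plain-NumPy setting) is an illustrative assumption for clarity, not the paper's implementation — the actual AttLWB operates on learned features with differentiable bilinear sampling inside the GAN.

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a feature map (C, H, W) by a dense flow field (2, H, W).

    flow[0] and flow[1] give, for each target pixel, x/y offsets into the
    source. Nearest-neighbour sampling keeps the sketch short; a trainable
    module would use differentiable bilinear sampling instead.
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_x = np.clip(np.round(xs + flow[0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[1]).astype(int), 0, H - 1)
    return feat[:, src_y, src_x]

def attention_fuse(warped_list, scores):
    """Fuse several warped source feature maps with per-pixel softmax attention.

    warped_list: list of S arrays of shape (C, H, W).
    scores: per-source logits of shape (S, H, W).
    Returns a single fused (C, H, W) feature map.
    """
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)      # softmax over the S sources
    stacked = np.stack(warped_list)           # (S, C, H, W)
    return (w[:, None] * stacked).sum(axis=0) # attention-weighted sum
```

With zero flow the warp is the identity, and with equal attention logits the fusion reduces to a plain average of the sources — two sanity checks that make the intended behaviour of the block easy to verify.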
Persistent Identifier: http://hdl.handle.net/10722/345032
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158

 

DC Field: Value
dc.contributor.author: Liu, Wen
dc.contributor.author: Piao, Zhixin
dc.contributor.author: Tu, Zhi
dc.contributor.author: Luo, Wenhan
dc.contributor.author: Ma, Lin
dc.contributor.author: Gao, Shenghua
dc.date.accessioned: 2024-08-15T09:24:47Z
dc.date.available: 2024-08-15T09:24:47Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, v. 44, n. 9, p. 5114-5131
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/345032
dc.description.abstract: We tackle human image synthesis, including human motion imitation, appearance transfer, and novel view synthesis, within a unified framework: once trained, the model can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints (pose) to estimate the human body structure; however, keypoints express only position and cannot characterize a person's individual body shape or model limb rotations. In this paper, we propose a 3D body mesh recovery module to disentangle pose and shape, which models not only joint locations and rotations but also the personalized body shape. To preserve source information such as texture, style, color, and face identity, we propose an Attentional Liquid Warping GAN with an Attentional Liquid Warping Block (AttLWB) that propagates source information in both image and feature spaces to the synthesized reference. Specifically, the source features are extracted by a denoising convolutional auto-encoder to characterize the source identity well. Our method also supports flexible warping from multiple sources. To further improve generalization to unseen source images, one/few-shot adversarial learning is applied: the model is first trained on an extensive training set, then fine-tuned on one or a few unseen images in a self-supervised way to generate high-resolution (512×512 and 1024×1024) results. We also build a new dataset, the Impersonator (iPER) dataset, for evaluating human motion imitation, appearance transfer, and novel view synthesis. Extensive experiments demonstrate the effectiveness of our method in preserving face identity, shape consistency, and clothing details. All code and the dataset are available at https://impersonator.org/work/impersonator-plus-plus.html.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.subject: appearance transfer
dc.subject: generative adversarial network
dc.subject: Human image synthesis
dc.subject: motion imitation
dc.subject: novel view synthesis
dc.title: Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPAMI.2021.3078270
dc.identifier.pmid: 33961551
dc.identifier.scopus: eid_2-s2.0-85105882963
dc.identifier.volume: 44
dc.identifier.issue: 9
dc.identifier.spage: 5114
dc.identifier.epage: 5131
dc.identifier.eissn: 1939-3539
