Conference Paper: Liquid warping GAN: A unified framework for human motion imitation, appearance transfer and novel view synthesis

Title: Liquid warping GAN: A unified framework for human motion imitation, appearance transfer and novel view synthesis
Authors: Liu, Wen; Piao, Zhixin; Min, Jie; Luo, Wenhan; Ma, Lin; Gao, Shenghua
Issue Date: 2019
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 5903-5912
Abstract: We tackle human motion imitation, appearance transfer, and novel view synthesis within a unified framework: once trained, the model can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints (pose) to estimate human body structure. However, keypoints express only position; they can neither characterize the personalized shape of an individual nor model limb rotations. In this paper, we propose to use a 3D body mesh recovery module to disentangle pose and shape, which models not only joint locations and rotations but also the personalized body shape. To preserve source information such as texture, style, color, and face identity, we propose a Liquid Warping GAN with a Liquid Warping Block (LWB) that propagates the source information in both image and feature spaces and synthesizes an image with respect to the reference. Specifically, the source features are extracted by a denoising convolutional auto-encoder to characterize the source identity well. Furthermore, our method supports more flexible warping from multiple sources. In addition, we build a new dataset, the Impersonator (iPER) dataset, for evaluating human motion imitation, appearance transfer, and novel view synthesis. Extensive experiments demonstrate the effectiveness of our method in several respects, such as robustness under occlusion, preservation of face identity, shape consistency, and clothing details. All code and datasets are available at https://svip-lab.github.io/project/impersonator.html.
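
The core mechanism named in the abstract, the Liquid Warping Block, can be sketched in a few lines. The sketch below is a minimal illustration only, not the authors' released implementation: it assumes a PyTorch setting, a precomputed sampling grid `flow` (in the paper this transformation flow is derived from the 3D body-mesh correspondences between source and reference), and hypothetical tensor names. The block warps source-stream features into the reference pose by bilinear sampling and adds them to the synthesis-stream features, which is how source texture and identity can propagate in feature space.

    import torch.nn.functional as F

    def liquid_warping_block(src_feat, tgt_feat, flow):
        """Illustrative sketch of a Liquid Warping Block (LWB).

        src_feat: (N, C, H, W) features from the source (identity) stream.
        tgt_feat: (N, C, H, W) features from the synthesis stream.
        flow:     (N, H, W, 2) sampling grid in [-1, 1], assumed here to be
                  precomputed from 3D body-mesh correspondences.
        """
        # Warp the source features toward the reference pose by bilinear sampling.
        warped = F.grid_sample(src_feat, flow, mode="bilinear",
                               padding_mode="zeros", align_corners=True)
        # Propagate the warped source information into the synthesis stream.
        return tgt_feat + warped

Under the same assumptions, warping from multiple sources (as the abstract claims) would amount to applying the block with each source's features and flow and aggregating the warped features, e.g. by summation, into the one synthesis stream.
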
Persistent Identifier: http://hdl.handle.net/10722/345112
ISSN: 1550-5499
2023 SCImago Journal Rankings: 12.263

DC Field                   Value                                                            Language
dc.contributor.author      Liu, Wen                                                         -
dc.contributor.author      Piao, Zhixin                                                     -
dc.contributor.author      Min, Jie                                                         -
dc.contributor.author      Luo, Wenhan                                                      -
dc.contributor.author      Ma, Lin                                                          -
dc.contributor.author      Gao, Shenghua                                                    -
dc.date.accessioned        2024-08-15T09:25:20Z                                             -
dc.date.available          2024-08-15T09:25:20Z                                             -
dc.date.issued             2019                                                             -
dc.identifier.citation     Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 5903-5912    -
dc.identifier.issn         1550-5499                                                        -
dc.identifier.uri          http://hdl.handle.net/10722/345112                               -
dc.description.abstract    (same as the Abstract above)                                     -
dc.language                eng                                                              -
dc.relation.ispartof       Proceedings of the IEEE International Conference on Computer Vision    -
dc.title                   Liquid warping GAN: A unified framework for human motion imitation, appearance transfer and novel view synthesis    -
dc.type                    Conference_Paper                                                 -
dc.description.nature      link_to_subscribed_fulltext                                      -
dc.identifier.doi          10.1109/ICCV.2019.00600                                          -
dc.identifier.scopus       eid_2-s2.0-85081894278                                           -
dc.identifier.volume       2019-October                                                     -
dc.identifier.spage        5903                                                             -
dc.identifier.epage        5912                                                             -
