Conference Paper: Liquid warping GAN: A unified framework for human motion imitation, appearance transfer and novel view synthesis
Field | Value |
---|---|
Title | Liquid warping GAN: A unified framework for human motion imitation, appearance transfer and novel view synthesis |
Authors | Liu, Wen; Piao, Zhixin; Min, Jie; Luo, Wenhan; Ma, Lin; Gao, Shenghua |
Issue Date | 2019 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 5903-5912 |
Abstract | We tackle human motion imitation, appearance transfer, and novel view synthesis within a unified framework, meaning that the model, once trained, can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints (pose) to estimate the human body structure. However, keypoints only express position information and can neither characterize the personalized shape of the individual person nor model limb rotations. In this paper, we propose to use a 3D body mesh recovery module to disentangle pose and shape, which can not only model joint locations and rotations but also characterize the personalized body shape. To preserve source information such as texture, style, color, and face identity, we propose a Liquid Warping GAN with a Liquid Warping Block (LWB) that propagates the source information in both image and feature spaces and synthesizes an image with respect to the reference. Specifically, the source features are extracted by a denoising convolutional auto-encoder to characterize the source identity well. Furthermore, our proposed method supports more flexible warping from multiple sources. In addition, we build a new dataset, the Impersonator (iPER) dataset, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis. Extensive experiments demonstrate the effectiveness of our method in several aspects, such as robustness in occlusion cases and preservation of face identity, shape consistency, and clothing details. All code and datasets are available at https://svip-lab.github.io/project/impersonator.html. |
Persistent Identifier | http://hdl.handle.net/10722/345112 |
ISSN | 1550-5499 |
2023 SCImago Journal Rankings | 12.263 |
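The Liquid Warping Block described in the abstract propagates source information by warping source features into the target coordinate frame and adding the result to the target-stream features. As a rough illustration only (not the authors' implementation: the names `bilinear_warp` and `liquid_warping_block` are hypothetical, the real model derives the transformation flow from the recovered 3D body mesh, and it operates on learned neural features rather than raw arrays), a minimal NumPy sketch of flow-based bilinear warping with additive multi-source aggregation might look like this:

```python
import numpy as np

def bilinear_warp(feat, flow):
    """Warp a feature map (C, H, W) by a dense flow field (2, H, W).

    flow[0] and flow[1] give, for each target pixel, the absolute (x, y)
    source coordinates to sample from; values are bilinearly interpolated.
    """
    C, H, W = feat.shape
    x = np.clip(flow[0], 0, W - 1)
    y = np.clip(flow[1], 0, H - 1)
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, W - 1)
    y1 = np.minimum(y0 + 1, H - 1)
    wx = x - x0  # horizontal interpolation weight per pixel
    wy = y - y0  # vertical interpolation weight per pixel
    # Blend the four neighbouring source samples for every target pixel.
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])

def liquid_warping_block(target_feat, source_feats, flows):
    """LWB-style aggregation: warp each source feature map into the target
    frame and add the warped results to the target-stream feature map."""
    out = target_feat.copy()
    for feat, flow in zip(source_feats, flows):
        out += bilinear_warp(feat, flow)
    return out
```

Because the aggregation is a sum of warped feature maps, the block naturally extends to any number of sources, which matches the paper's claim of flexible warping from multiple sources.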
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Wen | - |
dc.contributor.author | Piao, Zhixin | - |
dc.contributor.author | Min, Jie | - |
dc.contributor.author | Luo, Wenhan | - |
dc.contributor.author | Ma, Lin | - |
dc.contributor.author | Gao, Shenghua | - |
dc.date.accessioned | 2024-08-15T09:25:20Z | - |
dc.date.available | 2024-08-15T09:25:20Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2019, v. 2019-October, p. 5903-5912 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345112 | - |
dc.description.abstract | We tackle human motion imitation, appearance transfer, and novel view synthesis within a unified framework, meaning that the model, once trained, can handle all of these tasks. Existing task-specific methods mainly use 2D keypoints (pose) to estimate the human body structure. However, keypoints only express position information and can neither characterize the personalized shape of the individual person nor model limb rotations. In this paper, we propose to use a 3D body mesh recovery module to disentangle pose and shape, which can not only model joint locations and rotations but also characterize the personalized body shape. To preserve source information such as texture, style, color, and face identity, we propose a Liquid Warping GAN with a Liquid Warping Block (LWB) that propagates the source information in both image and feature spaces and synthesizes an image with respect to the reference. Specifically, the source features are extracted by a denoising convolutional auto-encoder to characterize the source identity well. Furthermore, our proposed method supports more flexible warping from multiple sources. In addition, we build a new dataset, the Impersonator (iPER) dataset, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis. Extensive experiments demonstrate the effectiveness of our method in several aspects, such as robustness in occlusion cases and preservation of face identity, shape consistency, and clothing details. All code and datasets are available at https://svip-lab.github.io/project/impersonator.html. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | Liquid warping GAN: A unified framework for human motion imitation, appearance transfer and novel view synthesis | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV.2019.00600 | - |
dc.identifier.scopus | eid_2-s2.0-85081894278 | - |
dc.identifier.volume | 2019-October | - |
dc.identifier.spage | 5903 | - |
dc.identifier.epage | 5912 | - |