Conference Paper: Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content

Title: Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content
Authors: Yang, H; Zhang, R; Guo, X; Liu, W; Zuo, W; Luo, P
Keywords: Semantics; Clothing; Layout; Visualization; Image segmentation
Issue Date: 2020
Publisher: IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147
Citation: Proceedings of IEEE/CVF International Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 13-19 June 2020, p. 7847-7856
Abstract: Image visual try-on aims at transferring a target clothes image onto a reference person and has become a hot topic in recent years. Prior works usually focus on preserving the character of a clothes image (e.g., texture, logo, embroidery) when warping it to an arbitrary human pose. However, it remains a big challenge to generate photo-realistic try-on images when large occlusions and complex human poses appear in the reference image. To address this issue, we propose a novel visual try-on network, namely the Adaptive Content Generating and Preserving Network (ACGPN). In particular, ACGPN first predicts the semantic layout of the reference image that will be changed after try-on (e.g., long sleeve shirt → arm, arm → jacket), and then determines whether the corresponding image content needs to be generated or preserved according to the predicted semantic layout, leading to photo-realistic try-on results with rich clothes details. ACGPN involves three major modules. First, a semantic layout generation module uses the semantic segmentation of the reference image to progressively predict the desired semantic layout after try-on. Second, a clothes warping module warps the clothes image according to the generated semantic layout, where a second-order difference constraint is introduced to stabilize the warping process during training. Third, an inpainting module for content fusion integrates all information (e.g., reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body. In comparison to state-of-the-art methods, ACGPN can generate photo-realistic images with much better perceptual quality and richer fine details.
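The second-order difference constraint mentioned in the abstract can be illustrated with a small sketch. The Python code below is not the authors' implementation; it assumes the clothes warping module predicts a regular grid of TPS control points and penalizes the second-order differences between neighbouring points along rows and columns, which is zero for locally linear deformations and grows when adjacent control points move inconsistently. All names (second_order_difference_penalty, control_points) are hypothetical, and the paper's actual constraint may include additional terms.

import numpy as np

def second_order_difference_penalty(control_points: np.ndarray) -> float:
    """Illustrative second-order difference penalty on a grid of TPS control points.

    control_points: array of shape (H, W, 2) holding the (x, y) positions of a
    regular grid of warping control points. The penalty sums
    |p[i-1] - 2*p[i] + p[i+1]| along rows and columns, so it vanishes for an
    undistorted or affinely transformed grid and increases for erratic warps.
    """
    # Second-order differences along the vertical (row) direction.
    d_rows = control_points[:-2, :, :] - 2 * control_points[1:-1, :, :] + control_points[2:, :, :]
    # Second-order differences along the horizontal (column) direction.
    d_cols = control_points[:, :-2, :] - 2 * control_points[:, 1:-1, :] + control_points[:, 2:, :]
    return float(np.abs(d_rows).sum() + np.abs(d_cols).sum())

if __name__ == "__main__":
    # A perfectly regular 5x5 grid has zero penalty; perturbing one point raises it.
    ys, xs = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
    grid = np.stack([xs, ys], axis=-1).astype(float)
    print(second_order_difference_penalty(grid))        # 0.0
    grid[2, 2] += 0.5
    print(second_order_difference_penalty(grid) > 0.0)  # True

In training, such a penalty would typically be added to the warping loss so that the learned deformation stays smooth while still fitting the target layout.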
Description: Session: Poster 2.3 — Face, Gesture, and Body Pose; Motion and Tracking; Image and Video Synthesis; Neural Generative Models; Optimization and Learning Methods; Poster no. 50; Paper ID 5162
CVPR 2020 held virtually due to COVID-19
Persistent Identifier: http://hdl.handle.net/10722/284262
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331

 

DC Field | Value | Language
dc.contributor.author | Yang, H | -
dc.contributor.author | Zhang, R | -
dc.contributor.author | Guo, X | -
dc.contributor.author | Liu, W | -
dc.contributor.author | Zuo, W | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2020-07-20T05:57:20Z | -
dc.date.available | 2020-07-20T05:57:20Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of IEEE/CVF International Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, USA, 13-19 June 2020, p. 7847-7856 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10722/284262 | -
dc.description | Session: Poster 2.3 — Face, Gesture, and Body Pose; Motion and Tracking; Image and Video Synthesis; Neural Generative Models; Optimization and Learning Methods; Poster no. 50; Paper ID 5162 | -
dc.description | CVPR 2020 held virtually due to COVID-19 | -
dc.description.abstract | Image visual try-on aims at transferring a target clothes image onto a reference person and has become a hot topic in recent years. Prior works usually focus on preserving the character of a clothes image (e.g., texture, logo, embroidery) when warping it to an arbitrary human pose. However, it remains a big challenge to generate photo-realistic try-on images when large occlusions and complex human poses appear in the reference image. To address this issue, we propose a novel visual try-on network, namely the Adaptive Content Generating and Preserving Network (ACGPN). In particular, ACGPN first predicts the semantic layout of the reference image that will be changed after try-on (e.g., long sleeve shirt → arm, arm → jacket), and then determines whether the corresponding image content needs to be generated or preserved according to the predicted semantic layout, leading to photo-realistic try-on results with rich clothes details. ACGPN involves three major modules. First, a semantic layout generation module uses the semantic segmentation of the reference image to progressively predict the desired semantic layout after try-on. Second, a clothes warping module warps the clothes image according to the generated semantic layout, where a second-order difference constraint is introduced to stabilize the warping process during training. Third, an inpainting module for content fusion integrates all information (e.g., reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body. In comparison to state-of-the-art methods, ACGPN can generate photo-realistic images with much better perceptual quality and richer fine details. | -
dc.language | eng | -
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000147 | -
dc.relation.ispartof | IEEE Conference on Computer Vision and Pattern Recognition. Proceedings | -
dc.rights | IEEE Conference on Computer Vision and Pattern Recognition. Proceedings. Copyright © IEEE Computer Society. | -
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.subject | Semantics | -
dc.subject | Clothing | -
dc.subject | Layout | -
dc.subject | Visualization | -
dc.subject | Image segmentation | -
dc.title | Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.doi | 10.1109/CVPR42600.2020.00787 | -
dc.identifier.scopus | eid_2-s2.0-85094826284 | -
dc.identifier.hkuros | 311025 | -
dc.identifier.spage | 7847 | -
dc.identifier.epage | 7856 | -
dc.publisher.place | United States | -
dc.identifier.issnl | 1063-6919 | -
