File Download: no files are associated with this item; fulltext links may require a subscription.
Conference Paper: DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations

Title: DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations
Authors: Liu, Ziwei; Luo, Ping; Qiu, Shi; Wang, Xiaogang; Tang, Xiaoou
Issue Date: 2016
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, v. 2016-December, p. 1096-1104
Abstract: © 2016 IEEE. Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotation they provide and cannot cope with the varied challenges of real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, richly annotated with massive attributes, clothing landmarks, and correspondences between images taken in different scenarios, including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms for clothes recognition and facilitate future research. To demonstrate the advantages of DeepFashion, we propose a new deep model, FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features, and the model is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.
Persistent Identifier: http://hdl.handle.net/10722/273570
ISSN: 1063-6919
2020 SCImago Journal Rankings: 4.658
ISI Accession Number ID: WOS:000400012301016
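As a rough illustration of the landmark-driven pooling and gating the abstract describes, the sketch below pools a small window of a convolutional feature map around each predicted landmark and gates it by the landmark's visibility. This is a hypothetical simplification: the paper does not specify FashionNet's actual layers, window sizes, or gating function here, so all names and parameters below are illustrative assumptions.

```python
import numpy as np

def landmark_gate(features, landmarks, visibility, window=2):
    """Pool local features around predicted landmarks and gate them by
    visibility (hypothetical sketch, not FashionNet's real architecture).

    features:   (H, W, C) convolutional feature map
    landmarks:  (K, 2) integer (row, col) landmark positions
    visibility: (K,) scores in [0, 1]; 0 suppresses an occluded landmark
    """
    H, W, C = features.shape
    pooled = np.zeros((len(landmarks), C))
    for k, (r, c) in enumerate(landmarks):
        # clip a (2*window+1)-sized square to the feature-map bounds
        r0, r1 = max(0, r - window), min(H, r + window + 1)
        c0, c1 = max(0, c - window), min(W, c + window + 1)
        # max-pool the local window around the landmark ...
        pooled[k] = features[r0:r1, c0:c1].max(axis=(0, 1))
        # ... and gate it by the estimated visibility score
        pooled[k] *= visibility[k]
    return pooled  # (K, C) landmark-local clothing features
```

Gating by visibility lets features from occluded landmarks (common in street and consumer photos) contribute nothing, which is one plausible reading of how landmark estimates make the learned features robust.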

 

Dublin Core record (field: value):

dc.contributor.author: Liu, Ziwei
dc.contributor.author: Luo, Ping
dc.contributor.author: Qiu, Shi
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Tang, Xiaoou
dc.date.accessioned: 2019-08-12T09:55:58Z
dc.date.available: 2019-08-12T09:55:58Z
dc.date.issued: 2016
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, v. 2016-December, p. 1096-1104
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/273570
dc.description.abstract: © 2016 IEEE. Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotation they provide and cannot cope with the varied challenges of real-world applications. In this work, we introduce DeepFashion, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, richly annotated with massive attributes, clothing landmarks, and correspondences between images taken in different scenarios, including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms for clothes recognition and facilitate future research. To demonstrate the advantages of DeepFashion, we propose a new deep model, FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features, and the model is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2016.124
dc.identifier.scopus: eid_2-s2.0-84986260103
dc.identifier.volume: 2016-December
dc.identifier.spage: 1096
dc.identifier.epage: 1104
dc.identifier.isi: WOS:000400012301016
dc.identifier.issnl: 1063-6919
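The Dublin Core record above can in principle be harvested over OAI-PMH, the protocol behind the repository's XML export. Below is a minimal sketch of building a standard `GetRecord` request; the endpoint URL and the `oai:hub.hku.hk:<handle>` identifier scheme are assumptions (common DSpace conventions) and should be verified against the repository's own OAI-PMH documentation.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and identifier scheme -- verify against the
# repository's OAI-PMH documentation before relying on them.
OAI_ENDPOINT = "https://hub.hku.hk/oai/request"

def get_record_url(handle, metadata_prefix="oai_dc"):
    """Build an OAI-PMH GetRecord URL for a handle like '10722/273570'."""
    params = {
        "verb": "GetRecord",
        "metadataPrefix": metadata_prefix,  # unqualified Dublin Core, as above
        "identifier": f"oai:hub.hku.hk:{handle}",
    }
    return f"{OAI_ENDPOINT}?{urlencode(params)}"
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) would return an OAI-PMH XML envelope containing the same `dc.*` fields listed in the record above.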

Export: via the OAI-PMH interface (XML formats) or to other non-XML formats.