Conference Paper: Deep learning face attributes in the wild

Title: Deep learning face attributes in the wild
Authors: Liu, Ziwei; Luo, Ping; Wang, Xiaogang; Tang, Xiaoou
Issue Date: 2015
Citation: Proceedings of the IEEE International Conference on Computer Vision (ICCV 2015), p. 3730-3738
Abstract: © 2015 IEEE. Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently: LNet is pre-trained on massive general object categories for face localization, while ANet is pre-trained on massive face identities for attribute prediction. This framework not only outperforms the state of the art by a large margin, but also reveals valuable facts about learning face representations. (1) It shows how the performance of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images strongly indicate face locations. This enables training LNet for face localization with only image-level annotations, without the face bounding boxes or landmarks required by existing attribute recognition methods. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and that these concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained by a sparse linear combination of these concepts.
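The sparse-combination claim in point (3) of the abstract can be illustrated with a small numeric sketch. Everything below is an assumption for illustration, not the authors' code: the "concept activations" are synthetic random data standing in for ANet's hidden-neuron responses, and sparsity is obtained with a plain ISTA (iterative soft-thresholding) lasso solver.

```python
import numpy as np

# Hedged sketch: an attribute expressed as a sparse linear combination of
# "concept" activations. Data and dimensions are synthetic stand-ins.
rng = np.random.default_rng(0)
n_samples, n_concepts = 200, 50

# Stand-in for high-level hidden-neuron (concept) responses
H = rng.normal(size=(n_samples, n_concepts))

# Ground truth: the attribute depends on only three of the concepts
w_true = np.zeros(n_concepts)
w_true[[4, 17, 31]] = [1.5, -2.0, 0.8]
y = H @ w_true + 0.01 * rng.normal(size=n_samples)

def ista(H, y, lam=5.0, iters=500):
    """Minimise 0.5*||Hw - y||^2 + lam*||w||_1 by iterative soft-thresholding."""
    t = 1.0 / np.linalg.norm(H, 2) ** 2  # step size from the spectral norm
    w = np.zeros(H.shape[1])
    for _ in range(iters):
        w = w - t * (H.T @ (H @ w - y))                        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - t * lam, 0.0)  # soft shrinkage
    return w

w = ista(H, y)
support = np.flatnonzero(np.abs(w) > 0.1)
print(support)  # indices of the few concepts that explain the synthetic attribute
```

The lasso penalty drives the weights of irrelevant concepts to zero, so the recovered support names the handful of concepts behind the attribute, which is the interpretability property the abstract describes.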
Persistent Identifier: http://hdl.handle.net/10722/273720
ISSN: 1550-5499


DC Field / Value
dc.contributor.author: Liu, Ziwei
dc.contributor.author: Luo, Ping
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Tang, Xiaoou
dc.date.accessioned: 2019-08-12T09:56:27Z
dc.date.available: 2019-08-12T09:56:27Z
dc.date.issued: 2015
dc.identifier.citation: Proceedings of the IEEE International Conference on Computer Vision, 2015, v. 2015 International Conference on Computer Vision, ICCV 2015, p. 3730-3738
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/10722/273720
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.title: Deep learning face attributes in the wild
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCV.2015.425
dc.identifier.scopus: eid_2-s2.0-84973917446
dc.identifier.volume: 2015 International Conference on Computer Vision, ICCV 2015
dc.identifier.spage: 3730
dc.identifier.epage: 3738
