Conference Paper: A deep sum-product architecture for robust facial attributes analysis

Title: A deep sum-product architecture for robust facial attributes analysis
Authors: Luo, Ping; Wang, Xiaogang; Tang, Xiaoou
Keywords: deep learning; face recognition; attributes
Issue Date: 2013
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2013, p. 2864-2871
Abstract: Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach. © 2013 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/273662
ISI Accession Number ID: WOS:000351830500358
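
Note: The abstract describes organizing decision trees into a sum-product network (SPN) whose most probable explanation (MPE) gives the joint attribute prediction. The following minimal Python sketch illustrates only the generic SPN/MPE idea (max-product inference over sum and product nodes); the structure, weights, and attribute names ("smiling", "eyeglasses") are invented for illustration and are not the paper's learned model or code.

# Illustrative sketch of MPE (max-product) inference in a toy SPN.
# All structure, weights, and variable names below are assumptions
# made for this example, not the authors' trained network.

class Leaf:
    """Indicator leaf over a binary variable (e.g. one facial attribute)."""
    def __init__(self, var, value):
        self.var, self.value = var, value

    def max_value(self, evidence):
        # Unobserved variables are maximised over; observed ones must match.
        if evidence.get(self.var, self.value) == self.value:
            return 1.0, {self.var: self.value}
        return 0.0, {}

class Product:
    """Product node: multiplies child values, merges their assignments."""
    def __init__(self, children):
        self.children = children

    def max_value(self, evidence):
        p, assign = 1.0, {}
        for child in self.children:
            cp, ca = child.max_value(evidence)
            p *= cp
            assign.update(ca)
        return p, assign

class Sum:
    """Sum node: for MPE, the weighted sum is replaced by a weighted max."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node)

    def max_value(self, evidence):
        best_p, best_assign = 0.0, {}
        for w, child in self.weighted_children:
            cp, ca = child.max_value(evidence)
            if w * cp > best_p:
                best_p, best_assign = w * cp, ca
        return best_p, best_assign

# Toy SPN over two hypothetical binary attributes.
spn = Sum([
    (0.6, Product([Leaf("smiling", 1), Leaf("eyeglasses", 0)])),
    (0.4, Product([Leaf("smiling", 0), Leaf("eyeglasses", 1)])),
])

# MPE query: most probable joint assignment given "smiling" is observed as 1.
prob, assignment = spn.max_value({"smiling": 1})
print(prob, assignment)  # 0.6 {'smiling': 1, 'eyeglasses': 0}
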

 

DC Field: Value
dc.contributor.author: Luo, Ping
dc.contributor.author: Wang, Xiaogang
dc.contributor.author: Tang, Xiaoou
dc.date.accessioned: 2019-08-12T09:56:18Z
dc.date.available: 2019-08-12T09:56:18Z
dc.date.issued: 2013
dc.identifier.citation: Proceedings of the IEEE International Conference on Computer Vision, 2013, p. 2864-2871
dc.identifier.uri: http://hdl.handle.net/10722/273662
dc.description.abstract: Recent works have shown that facial attributes are useful in a number of applications such as face recognition and retrieval. However, estimating attributes in images with large variations remains a big challenge. This challenge is addressed in this paper. Unlike existing methods that assume the independence of attributes during their estimation, our approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions. First, we have modeled region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region. The detector allows us to locate the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of the region's localization and classification. Experimental results on a large data set with 22,400 images show the effectiveness of the proposed approach. © 2013 IEEE.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE International Conference on Computer Vision
dc.subject: deep learning
dc.subject: face recognition
dc.subject: attributes
dc.title: A deep sum-product architecture for robust facial attributes analysis
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICCV.2013.356
dc.identifier.scopus: eid_2-s2.0-84898796864
dc.identifier.spage: 2864
dc.identifier.epage: 2871
dc.identifier.isi: WOS:000351830500358
