Conference Paper: Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution

Title: Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution
Authors: Fan, Y; Lam, JCK; Li, VOK
Issue Date: 2020
Publisher: AAAI Press. The Journal's web site is located at https://aaai.org/Library/AAAI/aaai-library.php
Citation: Proceedings of the 34th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI-20), New York, NY, USA, 7-12 February 2020, v. 34 n. 7, p. 12701-12708
Abstract: The intensity estimation of facial action units (AUs) is challenging due to subtle changes in the person's facial appearance. Previous approaches mainly rely on probabilistic models or predefined rules for modeling co-occurrence relationships among AUs, leading to limited generalization. In contrast, we present a new learning framework that automatically learns the latent relationships of AUs by establishing semantic correspondences between feature maps. In the heatmap regression-based network, feature maps preserve rich semantic information associated with AU intensities and locations. Moreover, the AU co-occurring pattern can be reflected by activating a set of feature channels, where each channel encodes a specific visual pattern of an AU. This motivates us to model the correlation among feature channels, which implicitly represents the co-occurrence relationship of AU intensity levels. Specifically, we introduce a semantic correspondence convolution (SCC) module to dynamically compute the correspondences from deep and low-resolution feature maps, thus enhancing the discriminability of features. The experimental results demonstrate the effectiveness and the superior performance of our method on two benchmark datasets.
Description: AAAI-20 Technical Tracks 7 / Session: AAAI Technical Track: Vision
Persistent Identifier: http://hdl.handle.net/10722/288468
ISSN: 2159-5399
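The abstract above describes refining channel features with a dynamic graph convolution: feature channels are treated as graph nodes, and their correlations, which implicitly encode AU co-occurrence, are computed on the fly. The snippet below is a minimal illustrative sketch of that idea, not the authors' released SCC implementation; the class name, the EdgeConv-style aggregation, and the hidden size / k parameters are assumptions made only to give a runnable example.

```python
# Illustrative sketch only (not the paper's released code): each feature channel
# becomes a graph node, a k-NN graph is built dynamically from channel similarity,
# and neighbor information is aggregated EdgeConv-style to refine the descriptors.
import torch
import torch.nn as nn


class DynamicChannelGraphConv(nn.Module):
    def __init__(self, spatial_dim: int, hidden: int = 64, k: int = 4):
        super().__init__()
        self.k = k
        # Flattened H*W channel map -> compact node descriptor.
        self.embed = nn.Linear(spatial_dim, hidden)
        # Edge MLP applied to [center, neighbor - center] pairs.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature maps, e.g. from a heatmap-regression backbone.
        b, c, h, w = feat.shape
        nodes = self.embed(feat.flatten(2))                           # (B, C, hidden)

        # Pairwise channel distances; the k-NN graph is rebuilt for every input,
        # which is what makes the graph "dynamic".
        dist = torch.cdist(nodes, nodes)                              # (B, C, C)
        knn = dist.topk(self.k + 1, largest=False).indices[..., 1:]   # drop self: (B, C, k)

        # Gather neighbor descriptors and build EdgeConv-style edge features.
        idx = knn.unsqueeze(-1).expand(-1, -1, -1, nodes.size(-1))    # (B, C, k, hidden)
        neighbors = torch.gather(
            nodes.unsqueeze(1).expand(-1, c, -1, -1), 2, idx)         # (B, C, k, hidden)
        center = nodes.unsqueeze(2).expand(-1, -1, self.k, -1)        # (B, C, k, hidden)
        edges = torch.cat([center, neighbors - center], dim=-1)       # (B, C, k, 2*hidden)

        # Max-aggregate over neighbors to obtain refined channel descriptors.
        return self.edge_mlp(edges).max(dim=2).values                 # (B, C, hidden)


# Shape check on random data:
# x = torch.randn(2, 256, 8, 8)                    # deep, low-resolution feature maps
# y = DynamicChannelGraphConv(spatial_dim=64)(x)   # -> (2, 256, 64)
```

Rebuilding the k-nearest-neighbor graph from the current features, rather than using a fixed adjacency, is what the "dynamic" in the title refers to: channels that respond to co-occurring AU patterns in a given face end up connected, so their descriptors are refined jointly.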


DC Field | Value | Language
dc.contributor.author | Fan, Y | -
dc.contributor.author | Lam, JCK | -
dc.contributor.author | Li, VOK | -
dc.date.accessioned | 2020-10-05T12:13:21Z | -
dc.date.available | 2020-10-05T12:13:21Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of the 34th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI-20), New York, NY, USA, 7-12 February 2020, v. 34 n. 7, p. 12701-12708 | -
dc.identifier.issn | 2159-5399 | -
dc.identifier.uri | http://hdl.handle.net/10722/288468 | -
dc.description | AAAI-20 Technical Tracks 7 / Session: AAAI Technical Track: Vision | -
dc.description.abstract | The intensity estimation of facial action units (AUs) is challenging due to subtle changes in the person's facial appearance. Previous approaches mainly rely on probabilistic models or predefined rules for modeling co-occurrence relationships among AUs, leading to limited generalization. In contrast, we present a new learning framework that automatically learns the latent relationships of AUs via establishing semantic correspondences between feature maps. In the heatmap regression-based network, feature maps preserve rich semantic information associated with AU intensities and locations. Moreover, the AU co-occurring pattern can be reflected by activating a set of feature channels, where each channel encodes a specific visual pattern of AU. This motivates us to model the correlation among feature channels, which implicitly represents the co-occurrence relationship of AU intensity levels. Specifically, we introduce a semantic correspondence convolution (SCC) module to dynamically compute the correspondences from deep and low resolution feature maps, and thus enhancing the discriminability of features. The experimental results demonstrate the effectiveness and the superior performance of our method on two benchmark datasets. | -
dc.language | eng | -
dc.publisher | AAAI Press. The Journal's web site is located at https://aaai.org/Library/AAAI/aaai-library.php | -
dc.relation.ispartof | Proceedings of the AAAI Conference on Artificial Intelligence | -
dc.title | Facial Action Unit Intensity Estimation via Semantic Correspondence Learning with Dynamic Graph Convolution | -
dc.type | Conference_Paper | -
dc.identifier.email | Lam, JCK: h9992013@hkucc.hku.hk | -
dc.identifier.email | Li, VOK: vli@eee.hku.hk | -
dc.identifier.authority | Lam, JCK=rp00864 | -
dc.identifier.authority | Li, VOK=rp00150 | -
dc.identifier.doi | 10.1609/aaai.v34i07.6963 | -
dc.identifier.hkuros | 315147 | -
dc.identifier.volume | 34 | -
dc.identifier.issue | 7 | -
dc.identifier.spage | 12701 | -
dc.identifier.epage | 12708 | -
dc.publisher.place | United States | -
dc.identifier.issnl | 2159-5399 | -
