
Conference Paper: MOS: A Low Latency and Lightweight Framework for Face Detection, Landmark Localization, and Head Pose Estimation

Title: MOS: A Low Latency and Lightweight Framework for Face Detection, Landmark Localization, and Head Pose Estimation
Authors: Liu, Yepeng; Gu, Zaiwang; Gao, Shenghua; Wang, Dong; Zeng, Yusheng; Cheng, Jun
Issue Date: 2021
Citation: 32nd British Machine Vision Conference, BMVC 2021, 2021
Abstract: With the emergence of service robots and surveillance cameras, dynamic face recognition (DFR) in the wild has received much attention in recent years. Face detection and head pose estimation are two important steps for DFR. Very often, the pose is estimated after face detection; however, such sequential computation leads to higher latency. In this paper, we propose a low-latency and lightweight network for simultaneous face detection, landmark localization, and head pose estimation. Inspired by the observation that it is more challenging to locate facial landmarks for faces at large angles, a pose loss is proposed to constrain the learning. Moreover, we propose an uncertainty multi-task loss to learn the weights of the individual tasks automatically. Another challenge is that robots often use low-power computational units such as ARM-based computing cores, so lightweight networks must be used instead of heavy ones, which leads to a performance drop, especially for small and hard faces. We therefore propose online feedback sampling to augment the training samples across different scales, which automatically increases the diversity of the training data. Validation on the commonly used WIDER FACE, AFLW, and AFLW2000 datasets shows that the proposed method achieves state-of-the-art performance under low computational resources.
Persistent Identifier: http://hdl.handle.net/10722/345363
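The abstract does not spell out the uncertainty multi-task loss, but the description (learning per-task weights automatically) matches the widely used homoscedastic-uncertainty formulation of Kendall et al. The sketch below is an illustrative assumption, not the paper's exact loss: `uncertainty_weighted_loss`, the sample loss values, and the log-variance parameters are all hypothetical.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learnable uncertainty weights.

    Each task i contributes exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learned log-variance: a large s_i
    down-weights a noisy task, while the +s_i term penalizes
    making every task's weight vanish.
    """
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# Hypothetical detection, landmark, and pose losses with s_i = 0,
# i.e. all task weights start at exp(0) = 1.
total = uncertainty_weighted_loss([1.2, 0.8, 0.5], [0.0, 0.0, 0.0])
print(total)  # 2.5
```

In a training loop the `log_vars` would be trainable parameters optimized jointly with the network, so the relative task weights adapt automatically rather than being hand-tuned.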

 

DC Field: Value
dc.contributor.author: Liu, Yepeng
dc.contributor.author: Gu, Zaiwang
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Wang, Dong
dc.contributor.author: Zeng, Yusheng
dc.contributor.author: Cheng, Jun
dc.date.accessioned: 2024-08-15T09:26:53Z
dc.date.available: 2024-08-15T09:26:53Z
dc.date.issued: 2021
dc.identifier.citation: 32nd British Machine Vision Conference, BMVC 2021, 2021
dc.identifier.uri: http://hdl.handle.net/10722/345363
dc.description.abstract: With the emergence of service robots and surveillance cameras, dynamic face recognition (DFR) in the wild has received much attention in recent years. Face detection and head pose estimation are two important steps for DFR. Very often, the pose is estimated after face detection; however, such sequential computation leads to higher latency. In this paper, we propose a low-latency and lightweight network for simultaneous face detection, landmark localization, and head pose estimation. Inspired by the observation that it is more challenging to locate facial landmarks for faces at large angles, a pose loss is proposed to constrain the learning. Moreover, we propose an uncertainty multi-task loss to learn the weights of the individual tasks automatically. Another challenge is that robots often use low-power computational units such as ARM-based computing cores, so lightweight networks must be used instead of heavy ones, which leads to a performance drop, especially for small and hard faces. We therefore propose online feedback sampling to augment the training samples across different scales, which automatically increases the diversity of the training data. Validation on the commonly used WIDER FACE, AFLW, and AFLW2000 datasets shows that the proposed method achieves state-of-the-art performance under low computational resources.
dc.language: eng
dc.relation.ispartof: 32nd British Machine Vision Conference, BMVC 2021
dc.title: MOS: A Low Latency and Lightweight Framework for Face Detection, Landmark Localization, and Head Pose Estimation
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85176090163
