File Download

There are no files associated with this item.


Conference Paper: Action Categorization based on Arm Pose Modeling

Title: Action Categorization based on Arm Pose Modeling
Authors: Li, C; Yung, NHC
Issue Date: 2014
Citation: The 9th International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5-8 January 2014, v. 2, p. 39-47
Abstract: This paper proposes a novel method to categorize human actions based on arm pose modeling. Traditionally, human action categorization relies heavily on features extracted from videos or images. In this research, we exploit the relationship between action categorization and arm pose modeling, which can be represented as a graphical model. Given visual observations, both states are estimated by maximum a posteriori (MAP) inference: arm poses are first estimated under an action-category hypothesis by dynamic programming, and the hypothesis is then validated by a soft-max model based on the estimated arm poses. The prior distribution for each action is estimated in advance by a semi-parametric estimator, and pixel-based dense features including LBP, SIFT, colour-SIFT, and textons are used to enhance the likelihood computation via the joint AdaBoost algorithm. The proposed method has been evaluated on walking, waving, and jogging videos from the HumanEva-I dataset. It achieves better arm pose modeling performance than the mixtures-of-parts method and an action categorization success rate of 96.69%.
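The validation step described in the abstract scores each action hypothesis with a soft-max model over the estimated arm poses. A minimal illustrative sketch of that step (the feature vector, weights, and linear scoring are hypothetical stand-ins, not the paper's learned parameters):

```python
import math

def softmax(scores):
    """Convert raw class scores into a probability distribution."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def categorize(pose_features, weights):
    """Score each action hypothesis as a linear function of arm-pose
    features, then pick the most probable action via soft-max."""
    scores = [sum(w * f for w, f in zip(ws, pose_features))
              for ws in weights]
    probs = softmax(scores)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs

# Hypothetical example: 3 actions (walking, waving, jogging)
# described by 4 made-up arm-pose features.
weights = [[0.9, 0.1, -0.2, 0.0],
           [-0.3, 1.2, 0.4, 0.1],
           [0.5, -0.1, 0.8, 0.6]]
pose = [1.0, 0.2, 0.1, 0.3]
action, probs = categorize(pose, weights)
```

In the paper the pose estimates themselves come from dynamic programming under each action hypothesis; the sketch only shows the final soft-max decision over a fixed feature vector.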
Description: Area 3 - Image and Video Understanding
Full Papers; paper no. 88
Persistent Identifier: http://hdl.handle.net/10722/204076


DC Field: Value (Language)
dc.contributor.author: Li, C (en_US)
dc.contributor.author: Yung, NHC (en_US)
dc.date.accessioned: 2014-09-19T20:04:22Z
dc.date.available: 2014-09-19T20:04:22Z
dc.date.issued: 2014 (en_US)
dc.identifier.citation: The 9th International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5-8 January 2014, v. 2, p. 39-47 (en_US)
dc.identifier.uri: http://hdl.handle.net/10722/204076
dc.description: Area 3 - Image and Video Understanding
dc.description: Full Papers; paper no. 88
dc.description.abstract: This paper proposes a novel method to categorize human action based on arm pose modeling. Traditionally, human action categorization relies much on the extracted features from video or images. In this research, we exploit the relationship between action categorization and arm pose modeling, which can be visualized in a graphic model. Given visual observations, both states can be estimated by maximum a posteriori (MAP) in that arm poses are first estimated under the hypothesis of action category by dynamic programming, and then action category hypothesis is validated by soft-max model based on the estimated arm poses. The prior distribution for every action is estimated by a semi-parametric estimator in advance, and pixel-based dense features including LBP, SIFT, colour-SIFT, and texton are utilized to enhance the likelihood computation by the joint Adaboosting algorithm. The proposed method has been evaluated on videos of walking, waving and jog from the HumanEva-I dataset. It is found to have arm pose modeling performance better than the method of mixtures of parts, and action categorization success rate of 96.69%.
dc.language: eng (en_US)
dc.relation.ispartof: International Conference on Computer Vision Theory and Applications (VISAPP) (en_US)
dc.title: Action Categorization based on Arm Pose Modeling (en_US)
dc.type: Conference_Paper (en_US)
dc.identifier.email: Yung, NHC: nyung@eee.hku.hk (en_US)
dc.identifier.authority: Yung, NHC=rp00226 (en_US)
dc.identifier.hkuros: 238538 (en_US)
dc.identifier.volume: 2 (en_US)
dc.identifier.spage: 39 (en_US)
dc.identifier.epage: 47 (en_US)
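Dublin Core records like the one above can be harvested programmatically through the repository's OAI-PMH interface. A minimal sketch of building an OAI-PMH GetRecord request URL (the base endpoint URL and the `oai:` identifier form are assumptions for illustration, not this repository's verified endpoint):

```python
from urllib.parse import urlencode

def getrecord_url(base_url, identifier, metadata_prefix="oai_dc"):
    """Build an OAI-PMH GetRecord request URL for a single item,
    asking for its Dublin Core (oai_dc) metadata."""
    params = {
        "verb": "GetRecord",
        "identifier": identifier,
        "metadataPrefix": metadata_prefix,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint; the item identifier is assumed to follow the
# common oai scheme derived from the handle (10722/204076).
url = getrecord_url("https://example.org/oai/request",
                    "oai:example.org:10722/204076")
```

The response would be an XML envelope whose `<metadata>` element carries the same `dc.*` fields listed above.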
