Conference Paper: Action Categorization based on Arm Pose Modeling
Field | Value
---|---
Title | Action Categorization based on Arm Pose Modeling
Authors | Li, C; Yung, NHC
Issue Date | 2014 |
Citation | The 9th International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5-8 January 2014, v. 2, p. 39-47
Abstract | This paper proposes a novel method for categorizing human actions based on arm pose modeling. Traditionally, human action categorization relies heavily on features extracted from video or images. In this research, we exploit the relationship between action categorization and arm pose modeling, which can be represented in a graphical model. Given visual observations, both states are estimated by maximum a posteriori (MAP) inference: arm poses are first estimated under an action-category hypothesis by dynamic programming, and the hypothesis is then validated by a soft-max model based on the estimated arm poses. The prior distribution for each action is estimated in advance by a semi-parametric estimator, and pixel-based dense features including LBP, SIFT, colour-SIFT, and textons are used to enhance the likelihood computation via the joint AdaBoost algorithm. The proposed method has been evaluated on walking, waving, and jogging videos from the HumanEva-I dataset. It models arm poses more accurately than the mixtures-of-parts method and achieves an action categorization success rate of 96.69%.
Description | Area 3 - Image and Video Understanding; Full Papers; paper no. 88
Persistent Identifier | http://hdl.handle.net/10722/204076 |
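
The abstract describes a two-stage MAP inference in which arm poses are estimated under an action-category hypothesis and a soft-max model then validates that hypothesis from the estimated poses. As a rough, illustrative sketch only (the weight matrix, the 4-dimensional pose descriptor, and the action set below are invented for this example and are not the paper's actual model or parameters), the soft-max validation step might look like:

```python
import numpy as np

# Hypothetical arm-pose descriptor (e.g., joint-angle features) and a
# per-action weight matrix; all values here are invented placeholders.
rng = np.random.default_rng(0)
pose_features = rng.normal(size=4)     # estimated arm-pose descriptor
W = rng.normal(size=(3, 4))            # one weight row per action class
actions = ["walking", "waving", "jogging"]

def softmax(z):
    z = z - z.max()                    # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Score each action hypothesis against the pose descriptor, then
# normalize the scores into a probability distribution over actions.
probs = softmax(W @ pose_features)
best = actions[int(np.argmax(probs))]
```

In the paper's pipeline, the pose descriptor would come from the dynamic-programming pose estimate and the weights from training, rather than from random draws as in this sketch.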
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, C | en_US |
dc.contributor.author | Yung, NHC | en_US |
dc.date.accessioned | 2014-09-19T20:04:22Z | - |
dc.date.available | 2014-09-19T20:04:22Z | - |
dc.date.issued | 2014 | en_US |
dc.identifier.citation | The 9th International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5-8 January 2014, v. 2, p. 39-47 | en_US |
dc.identifier.uri | http://hdl.handle.net/10722/204076 | - |
dc.description | Area 3 - Image and Video Understanding | - |
dc.description | Full Papers; paper no. 88 | - |
dc.description.abstract | This paper proposes a novel method for categorizing human actions based on arm pose modeling. Traditionally, human action categorization relies heavily on features extracted from video or images. In this research, we exploit the relationship between action categorization and arm pose modeling, which can be represented in a graphical model. Given visual observations, both states are estimated by maximum a posteriori (MAP) inference: arm poses are first estimated under an action-category hypothesis by dynamic programming, and the hypothesis is then validated by a soft-max model based on the estimated arm poses. The prior distribution for each action is estimated in advance by a semi-parametric estimator, and pixel-based dense features including LBP, SIFT, colour-SIFT, and textons are used to enhance the likelihood computation via the joint AdaBoost algorithm. The proposed method has been evaluated on walking, waving, and jogging videos from the HumanEva-I dataset. It models arm poses more accurately than the mixtures-of-parts method and achieves an action categorization success rate of 96.69%. | -
dc.language | eng | en_US |
dc.relation.ispartof | International Conference on Computer Vision Theory and Applications (VISAPP) | en_US |
dc.title | Action Categorization based on Arm Pose Modeling | en_US |
dc.type | Conference_Paper | en_US |
dc.identifier.email | Yung, NHC: nyung@eee.hku.hk | en_US |
dc.identifier.authority | Yung, NHC=rp00226 | en_US |
dc.identifier.hkuros | 238538 | en_US |
dc.identifier.volume | 2 | en_US |
dc.identifier.spage | 39 | en_US |
dc.identifier.epage | 47 | en_US |