Article: Energy-Based Periodicity Mining with Deep Features for Action Repetition Counting in Unconstrained Videos

Title: Energy-Based Periodicity Mining with Deep Features for Action Repetition Counting in Unconstrained Videos
Authors: Yin, Jianqin; Wu, Yanchun; Zhu, Chaoran; Yin, Zijin; Liu, Huaping; Dang, Yonghao; Liu, Zhiyi; Liu, Jun
Keywords: Action repetition counting; deep ConvNets
Issue Date: 2021
Citation: IEEE Transactions on Circuits and Systems for Video Technology, 2021, v. 31, n. 12, p. 4812-4825
Abstract: Action repetition counting estimates how many times a repetitive motion occurs within one action, a relatively new, significant, but challenging problem. To solve it, we propose a new method that improves on traditional approaches in two respects: it requires no preprocessing and it applies to actions of arbitrary periodicity. Avoiding preprocessing makes the scheme convenient for real applications; handling arbitrary periodicity makes the model better suited to real-world conditions. Methodologically, we first extract action features using ConvNets and then apply the Principal Component Analysis algorithm to distill intuitive periodic information from the chaotic high-dimensional features; second, we propose an energy-based adaptive feature mode selection scheme that selects the proper deep feature mode according to the background of the video; third, we construct the periodic waveform of the action using high-energy rules that filter out irrelevant information. Finally, we detect the peaks of the waveform to obtain the repetition count. Our contributions are two-fold: 1) we show that features extracted by ConvNets for action recognition can model the self-similar periodicity of repetitive actions well; 2) we present a high-energy periodicity mining rule using ConvNet features that can process arbitrary actions without preprocessing. Experimental results show that our method achieves superior or comparable performance on three benchmark datasets: YT-Segments, QUVA, and RARV.
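The pipeline described in the abstract (deep features per frame, PCA to a one-dimensional periodic signal, peak detection to count repetitions) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix is synthetic stand-in data (the paper uses ConvNet features), and the peak-detection parameters are illustrative assumptions.

```python
# Sketch: reduce per-frame features with PCA to a 1-D periodic waveform,
# then count action repetitions as peaks of that waveform.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
n_frames, n_dims, n_reps = 300, 64, 5

# Synthetic stand-in for per-frame deep features containing a repetitive
# component (5 cycles) plus noise; the paper extracts these with ConvNets.
t = np.linspace(0, 2 * np.pi * n_reps, n_frames)
features = np.outer(np.sin(t), rng.standard_normal(n_dims))
features += 0.1 * rng.standard_normal((n_frames, n_dims))

# PCA via SVD of the centered feature matrix: the first principal
# component captures the dominant (periodic) variation across frames.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
signal = centered @ vt[0]  # 1-D periodic waveform

# Count repetitions as peaks; the height and distance thresholds stand in
# for the paper's high-energy filtering of irrelevant information.
peaks, _ = find_peaks(signal,
                      height=0.5 * signal.max(),
                      distance=n_frames // (2 * n_reps))
print("repetitions counted:", len(peaks))
```

With this synthetic signal, the detector recovers one peak per cycle; on real video, the energy-based feature-mode selection described in the abstract would choose which feature dimensions feed the waveform.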
Persistent Identifier: http://hdl.handle.net/10722/349526
ISSN: 1051-8215
2023 Impact Factor: 8.3
2023 SCImago Journal Rankings: 2.299

 

DC Field | Value | Language
dc.contributor.author | Yin, Jianqin | -
dc.contributor.author | Wu, Yanchun | -
dc.contributor.author | Zhu, Chaoran | -
dc.contributor.author | Yin, Zijin | -
dc.contributor.author | Liu, Huaping | -
dc.contributor.author | Dang, Yonghao | -
dc.contributor.author | Liu, Zhiyi | -
dc.contributor.author | Liu, Jun | -
dc.date.accessioned | 2024-10-17T06:59:07Z | -
dc.date.available | 2024-10-17T06:59:07Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | IEEE Transactions on Circuits and Systems for Video Technology, 2021, v. 31, n. 12, p. 4812-4825 | -
dc.identifier.issn | 1051-8215 | -
dc.identifier.uri | http://hdl.handle.net/10722/349526 | -
dc.description.abstract | Action repetition counting estimates how many times a repetitive motion occurs within one action, a relatively new, significant, but challenging problem. To solve it, we propose a new method that improves on traditional approaches in two respects: it requires no preprocessing and it applies to actions of arbitrary periodicity. Avoiding preprocessing makes the scheme convenient for real applications; handling arbitrary periodicity makes the model better suited to real-world conditions. Methodologically, we first extract action features using ConvNets and then apply the Principal Component Analysis algorithm to distill intuitive periodic information from the chaotic high-dimensional features; second, we propose an energy-based adaptive feature mode selection scheme that selects the proper deep feature mode according to the background of the video; third, we construct the periodic waveform of the action using high-energy rules that filter out irrelevant information. Finally, we detect the peaks of the waveform to obtain the repetition count. Our contributions are two-fold: 1) we show that features extracted by ConvNets for action recognition can model the self-similar periodicity of repetitive actions well; 2) we present a high-energy periodicity mining rule using ConvNet features that can process arbitrary actions without preprocessing. Experimental results show that our method achieves superior or comparable performance on three benchmark datasets: YT-Segments, QUVA, and RARV. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Circuits and Systems for Video Technology | -
dc.subject | Action repetition counting | -
dc.subject | deep ConvNets | -
dc.title | Energy-Based Periodicity Mining with Deep Features for Action Repetition Counting in Unconstrained Videos | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TCSVT.2021.3055220 | -
dc.identifier.scopus | eid_2-s2.0-85100493902 | -
dc.identifier.volume | 31 | -
dc.identifier.issue | 12 | -
dc.identifier.spage | 4812 | -
dc.identifier.epage | 4825 | -
dc.identifier.eissn | 1558-2205 | -
