Conference Paper: Temporal pyramid network for action recognition

Title: Temporal pyramid network for action recognition
Authors: Yang, Ceyuan; Xu, Yinghao; Shi, Jianping; Dai, Bo; Zhou, Bolei
Issue Date: 2020
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, p. 588-597
Abstract: Visual tempo characterizes the dynamics and the temporal scale of an action. Modeling such visual tempos of different actions facilitates their recognition. Previous works often capture the visual tempo through sampling raw videos at multiple rates and constructing an input-level frame pyramid, which usually requires a costly multi-branch network to handle. In this work we propose a generic Temporal Pyramid Network (TPN) at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks in a plug-and-play manner. Two essential components of TPN, the source of features and the fusion of features, form a feature hierarchy for the backbone so that it can capture action instances at various tempos. TPN also shows consistent improvements over other challenging baselines on several action recognition datasets. Specifically, when equipped with TPN, the 3D ResNet-50 with dense sampling obtains a 2% gain on the validation set of Kinetics-400. A further analysis also reveals that TPN gains most of its improvements on action classes that have large variances in their visual tempos, validating the effectiveness of TPN.
Persistent Identifier: http://hdl.handle.net/10722/352212
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331
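As a rough, illustrative sketch of the feature-level temporal pyramid described in the abstract (plain PyTorch, not the authors' released implementation; the module name SimpleTemporalPyramid, the sampling rates, and the 1x1x1 projections are assumptions), the code below subsamples a 3D-backbone feature map at several temporal rates, projects each level, and fuses the levels back at a common temporal resolution:

# Minimal sketch of a feature-level temporal pyramid, loosely following the
# abstract's description of TPN; all names and design choices here are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleTemporalPyramid(nn.Module):
    """Fuses a backbone feature map sampled at several temporal rates."""

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.rates = rates
        # One lightweight 1x1x1 projection per pyramid level (assumed design choice).
        self.projs = nn.ModuleList(
            [nn.Conv3d(channels, channels, kernel_size=1) for _ in rates]
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, channels, time, height, width) from a 3D backbone stage.
        b, c, t, h, w = feat.shape
        levels = []
        for rate, proj in zip(self.rates, self.projs):
            # "Source of features": subsample the temporal axis at this rate.
            level = proj(feat[:, :, ::rate])
            # "Fusion of features": bring every level back to a common
            # temporal length so the levels can be aggregated.
            level = F.interpolate(level, size=(t, h, w), mode="trilinear",
                                  align_corners=False)
            levels.append(level)
        return torch.stack(levels, dim=0).sum(dim=0)


if __name__ == "__main__":
    x = torch.randn(2, 64, 8, 14, 14)   # dummy stage feature map
    fused = SimpleTemporalPyramid(64)(x)
    print(fused.shape)                   # torch.Size([2, 64, 8, 14, 14])

Summing the re-interpolated levels is the simplest possible fusion; the paper's actual choices for the feature source and the fusion are more elaborate than this sketch.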

 

DC Field: Value
dc.contributor.author: Yang, Ceyuan
dc.contributor.author: Xu, Yinghao
dc.contributor.author: Shi, Jianping
dc.contributor.author: Dai, Bo
dc.contributor.author: Zhou, Bolei
dc.date.accessioned: 2024-12-16T03:57:21Z
dc.date.available: 2024-12-16T03:57:21Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2020, p. 588-597
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/352212
dc.description.abstract: Visual tempo characterizes the dynamics and the temporal scale of an action. Modeling such visual tempos of different actions facilitates their recognition. Previous works often capture the visual tempo through sampling raw videos at multiple rates and constructing an input-level frame pyramid, which usually requires a costly multi-branch network to handle. In this work we propose a generic Temporal Pyramid Network (TPN) at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks in a plug-and-play manner. Two essential components of TPN, the source of features and the fusion of features, form a feature hierarchy for the backbone so that it can capture action instances at various tempos. TPN also shows consistent improvements over other challenging baselines on several action recognition datasets. Specifically, when equipped with TPN, the 3D ResNet-50 with dense sampling obtains a 2% gain on the validation set of Kinetics-400. A further analysis also reveals that TPN gains most of its improvements on action classes that have large variances in their visual tempos, validating the effectiveness of TPN.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.title: Temporal pyramid network for action recognition
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR42600.2020.00067
dc.identifier.scopus: eid_2-s2.0-85094136205
dc.identifier.spage: 588
dc.identifier.epage: 597