Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/ICCV.2017.46
- Scopus: eid_2-s2.0-85041891926
- Web of Science: WOS:000425498400037
Conference Paper: HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis
Title | HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis |
---|---|
Authors | Liu, Xihui; Zhao, Haiyu; Tian, Maoqing; Sheng, Lu; Shao, Jing; Yi, Shuai; Yan, Junjie; Wang, Xiaogang |
Issue Date | 2017 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2017, v. 2017-October, p. 350-359 |
Abstract | Pedestrian analysis plays a vital role in intelligent video surveillance and is a key component of security-centric computer vision systems. Although convolutional neural networks are remarkable at learning discriminative features from images, learning comprehensive features of pedestrians for fine-grained tasks remains an open problem. In this study, we propose a new attention-based deep neural network, named HydraPlus-Net (HP-net), that multi-directionally feeds the multi-level attention maps to different feature layers. The attentive deep features learned from the proposed HP-net bring unique advantages: (1) the model is capable of capturing multiple attentions from low-level to semantic-level, and (2) it explores the multi-scale selectiveness of attentive features to enrich the final feature representations for a pedestrian image. We demonstrate the effectiveness and generality of the proposed HP-net for pedestrian analysis on two tasks, i.e. pedestrian attribute recognition and person re-identification. Extensive experimental results show that the HP-net outperforms state-of-the-art methods on various datasets. |
Persistent Identifier | http://hdl.handle.net/10722/316488 |
ISSN | 1550-5499 |
2023 SCImago Journal Rankings | 12.263 |
ISI Accession Number ID | WOS:000425498400037 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Xihui | - |
dc.contributor.author | Zhao, Haiyu | - |
dc.contributor.author | Tian, Maoqing | - |
dc.contributor.author | Sheng, Lu | - |
dc.contributor.author | Shao, Jing | - |
dc.contributor.author | Yi, Shuai | - |
dc.contributor.author | Yan, Junjie | - |
dc.contributor.author | Wang, Xiaogang | - |
dc.date.accessioned | 2022-09-14T11:40:34Z | - |
dc.date.available | 2022-09-14T11:40:34Z | - |
dc.date.issued | 2017 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2017, v. 2017-October, p. 350-359 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316488 | - |
dc.description.abstract | Pedestrian analysis plays a vital role in intelligent video surveillance and is a key component of security-centric computer vision systems. Although convolutional neural networks are remarkable at learning discriminative features from images, learning comprehensive features of pedestrians for fine-grained tasks remains an open problem. In this study, we propose a new attention-based deep neural network, named HydraPlus-Net (HP-net), that multi-directionally feeds the multi-level attention maps to different feature layers. The attentive deep features learned from the proposed HP-net bring unique advantages: (1) the model is capable of capturing multiple attentions from low-level to semantic-level, and (2) it explores the multi-scale selectiveness of attentive features to enrich the final feature representations for a pedestrian image. We demonstrate the effectiveness and generality of the proposed HP-net for pedestrian analysis on two tasks, i.e. pedestrian attribute recognition and person re-identification. Extensive experimental results show that the HP-net outperforms state-of-the-art methods on various datasets. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV.2017.46 | - |
dc.identifier.scopus | eid_2-s2.0-85041891926 | - |
dc.identifier.volume | 2017-October | - |
dc.identifier.spage | 350 | - |
dc.identifier.epage | 359 | - |
dc.identifier.isi | WOS:000425498400037 | - |