Links for fulltext (may require subscription):
- Publisher Website: 10.1016/j.neucom.2020.01.123
- Scopus: eid_2-s2.0-85099514216
Citations: Scopus: 0
Appears in Collections: Article

SUNNet: A novel framework for simultaneous human parsing and pose estimation
Title | SUNNet: A novel framework for simultaneous human parsing and pose estimation |
---|---|
Authors | Xu, Yanyu; Piao, Zhixin; Zhang, Ziheng; Liu, Wen; Gao, Shenghua |
Keywords | Human parsing estimation; Human pose estimation |
Issue Date | 2021 |
Citation | Neurocomputing, 2021, v. 444, p. 349-355 |
Abstract | This paper presents a novel Separation-and-UnioN Network (SUNNet) for simultaneous human parsing and pose estimation. SUNNet consists of two stages: a feature separation stage and a feature union stage. In the feature separation stage, a common feature extractor implicitly encodes the correlation between human parsing and pose estimation, while two task-specific feature extractors extract features for each task. By combining the task-specific and common features with a feature consolidation module in a coarse-to-fine manner, we obtain initial predictions for parsing and pose estimation. In the feature union stage, we refine the initial predictions by explicitly leveraging features from the parallel task to predict the kernels' receptive fields in a convolutional neural network. We further propose to leverage a 3D human body reconstructed from the image to facilitate both tasks, and a novel Gated Feature Fusion (GFF) block automatically decides whether to use or skip the priors from the reconstructed 3D human body. Extensive experiments demonstrate the effectiveness of SUNNet for human body configuration analysis. |
Persistent Identifier | http://hdl.handle.net/10722/345128 |
ISSN | 0925-2312 (2023 Impact Factor: 5.5; 2023 SCImago Journal Rankings: 1.815) |
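The abstract describes a Gated Feature Fusion (GFF) block that decides whether to use or skip the priors from the reconstructed 3D human body. The paper's actual implementation is not given here; the following is a minimal NumPy sketch of one common form of such a gating mechanism (a learned sigmoid gate over concatenated features), with all weights and shapes hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_feature_fusion(task_feat, body_prior, w_gate, b_gate):
    """Sketch of a gated fusion block (illustrative, not the authors' code).

    A gate computed from the concatenated task features and 3D-body priors
    mixes the two per element: gate values near 0 effectively skip the
    prior, values near 1 use it.
    """
    x = np.concatenate([task_feat, body_prior], axis=-1)  # (N, 2C)
    gate = sigmoid(x @ w_gate + b_gate)                   # (N, C), in (0, 1)
    return gate * body_prior + (1.0 - gate) * task_feat   # (N, C)

# Toy usage with random weights (illustration only).
rng = np.random.default_rng(0)
C = 4
feat = rng.standard_normal((2, C))
prior = rng.standard_normal((2, C))
w = rng.standard_normal((2 * C, C))
b = np.zeros(C)
fused = gated_feature_fusion(feat, prior, w, b)
print(fused.shape)  # (2, 4)
```

With a strongly negative bias the gate saturates near zero and the block passes the task features through unchanged, which is the "skip the prior" behavior the abstract attributes to GFF.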
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, Yanyu | - |
dc.contributor.author | Piao, Zhixin | - |
dc.contributor.author | Zhang, Ziheng | - |
dc.contributor.author | Liu, Wen | - |
dc.contributor.author | Gao, Shenghua | - |
dc.date.accessioned | 2024-08-15T09:25:26Z | - |
dc.date.available | 2024-08-15T09:25:26Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Neurocomputing, 2021, v. 444, p. 349-355 | - |
dc.identifier.issn | 0925-2312 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345128 | - |
dc.description.abstract | This paper presents a novel Separation-and-UnioN Network (SUNNet) for simultaneous human parsing and pose estimation. SUNNet consists of two stages: a feature separation stage and a feature union stage. In the feature separation stage, a common feature extractor implicitly encodes the correlation between human parsing and pose estimation, while two task-specific feature extractors extract features for each task. By combining the task-specific and common features with a feature consolidation module in a coarse-to-fine manner, we obtain initial predictions for parsing and pose estimation. In the feature union stage, we refine the initial predictions by explicitly leveraging features from the parallel task to predict the kernels' receptive fields in a convolutional neural network. We further propose to leverage a 3D human body reconstructed from the image to facilitate both tasks, and a novel Gated Feature Fusion (GFF) block automatically decides whether to use or skip the priors from the reconstructed 3D human body. Extensive experiments demonstrate the effectiveness of SUNNet for human body configuration analysis. | -
dc.language | eng | - |
dc.relation.ispartof | Neurocomputing | - |
dc.subject | Human parsing estimation | - |
dc.subject | Human pose estimation | - |
dc.title | SUNNet: A novel framework for simultaneous human parsing and pose estimation | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1016/j.neucom.2020.01.123 | - |
dc.identifier.scopus | eid_2-s2.0-85099514216 | - |
dc.identifier.volume | 444 | - |
dc.identifier.spage | 349 | - |
dc.identifier.epage | 355 | - |
dc.identifier.eissn | 1872-8286 | - |