Article: Relational Part-Aware Learning for Complex Composite Object Detection in High-Resolution Remote Sensing Images

Title: Relational Part-Aware Learning for Complex Composite Object Detection in High-Resolution Remote Sensing Images
Authors: Yuan, Shuai; Zhang, Lixian; Dong, Runmin; Xiong, Jie; Zheng, Juepeng; Fu, Haohuan; Gong, Peng
Keywords: Complex composite object detection; Correlation; Feature extraction; high-resolution remote sensing images (RSIs); inter-relationship; Object detection; Power generation; Remote sensing; Semantics; Transformer; Transformers
Issue Date: 20-May-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Cybernetics, 2024
Abstract: In high-resolution remote sensing images (RSIs), complex composite object detection (e.g., coal-fired power plant detection and harbor detection) is challenging: instead of a clearly defined single object, multiple discrete parts with variable layouts lead to complex, weak inter-relationships and blurred boundaries. To address this issue, this article proposes an end-to-end framework, the relational part-aware network (REPAN), to explore the semantic correlations and extract discriminative features among multiple parts. Specifically, we first design a part region proposal network (P-RPN) to locate discriminative yet subtle regions. With butterfly units (BFUs) embedded, feature-scale confusion problems stemming from aliasing effects can be largely alleviated. Second, a feature relation Transformer (FRT) explores spatial relationships through part-and-global joint learning, modeling correlations between the various parts to enhance significant part representations. Finally, a contextual detector (CD) classifies and detects both the parts and the whole composite object through multirelation-aware features, where part information guides the localization of the whole object. We collect three remote sensing object detection datasets with four categories to evaluate our method. The results of extensive experiments show that our method consistently surpasses state-of-the-art methods, underscoring its effectiveness and superiority.
Persistent Identifier: http://hdl.handle.net/10722/348135
ISSN: 2168-2275
2023 Impact Factor: 9.4
2023 SCImago Journal Rankings: 5.641

 

DC Field: Value
dc.contributor.author: Yuan, Shuai
dc.contributor.author: Zhang, Lixian
dc.contributor.author: Dong, Runmin
dc.contributor.author: Xiong, Jie
dc.contributor.author: Zheng, Juepeng
dc.contributor.author: Fu, Haohuan
dc.contributor.author: Gong, Peng
dc.date.accessioned: 2024-10-05T00:30:45Z
dc.date.available: 2024-10-05T00:30:45Z
dc.date.issued: 2024-05-20
dc.identifier.citation: IEEE Transactions on Cybernetics, 2024
dc.identifier.issn: 2168-2275
dc.identifier.uri: http://hdl.handle.net/10722/348135
dc.description.abstract: In high-resolution remote sensing images (RSIs), complex composite object detection (e.g., coal-fired power plant detection and harbor detection) is challenging: instead of a clearly defined single object, multiple discrete parts with variable layouts lead to complex, weak inter-relationships and blurred boundaries. To address this issue, this article proposes an end-to-end framework, the relational part-aware network (REPAN), to explore the semantic correlations and extract discriminative features among multiple parts. Specifically, we first design a part region proposal network (P-RPN) to locate discriminative yet subtle regions. With butterfly units (BFUs) embedded, feature-scale confusion problems stemming from aliasing effects can be largely alleviated. Second, a feature relation Transformer (FRT) explores spatial relationships through part-and-global joint learning, modeling correlations between the various parts to enhance significant part representations. Finally, a contextual detector (CD) classifies and detects both the parts and the whole composite object through multirelation-aware features, where part information guides the localization of the whole object. We collect three remote sensing object detection datasets with four categories to evaluate our method. The results of extensive experiments show that our method consistently surpasses state-of-the-art methods, underscoring its effectiveness and superiority.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Cybernetics
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Complex composite object detection
dc.subject: Correlation
dc.subject: Feature extraction
dc.subject: high-resolution remote sensing images (RSIs)
dc.subject: inter-relationship
dc.subject: Object detection
dc.subject: Power generation
dc.subject: Remote sensing
dc.subject: Semantics
dc.subject: Transformer
dc.subject: Transformers
dc.title: Relational Part-Aware Learning for Complex Composite Object Detection in High-Resolution Remote Sensing Images
dc.type: Article
dc.identifier.doi: 10.1109/TCYB.2024.3392474
dc.identifier.scopus: eid_2-s2.0-85194037077
dc.identifier.issnl: 2168-2267
