Article: Pan-sharpening via Symmetric Multi-Scale Correction-Enhancement Transformers

Title: Pan-sharpening via Symmetric Multi-Scale Correction-Enhancement Transformers
Authors: Li, Yong; Wang, Yi; Shi, Shuai; Wang, Jiaming; Wang, Ruiyang; Lu, Mengqian; Zhang, Fan
Keywords: Pan-sharpening; Remote sensing image fusion; Self-similarity; Vision transformers
Issue Date: 1-Feb-2025
Publisher: Elsevier
Citation: Neural Networks, 2025, v. 185
Abstract: Pan-sharpening is a widely employed technique for enhancing the quality and accuracy of remote sensing images, particularly for downstream tasks that require high-resolution imagery. However, existing deep-learning methods often neglect the self-similarity present in remote sensing images. Ignoring it can result in poor fusion of texture and spectral details, leading to artifacts such as ringing and reduced clarity in the fused image. To address these limitations, we propose the Symmetric Multi-Scale Correction-Enhancement Transformers (SMCET) model. SMCET incorporates a Self-Similarity Refinement Transformers (SSRT) module to capture self-similarity from the frequency and spatial domains within a single scale, and an encoder–decoder framework that applies multi-scale transformations to model self-similarity across scales. Our experiments on multiple satellite datasets demonstrate that SMCET outperforms existing methods, offering superior texture and spectral details. The SMCET source code can be accessed at https://github.com/yonglleee/SMCET.
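
Illustration only: the following minimal PyTorch sketch mirrors the two ideas named in the abstract, a self-similarity refinement block that combines spatial self-attention with a frequency-domain (FFT) branch at a single scale, wrapped in a symmetric multi-scale encoder–decoder. All module names (SSRTBlock, SMCETSketch), layer choices, and hyperparameters are assumptions for exposition, not the authors' implementation; the actual code is at the GitHub link above.

# Hypothetical sketch only; see https://github.com/yonglleee/SMCET for the real model.
import torch
import torch.nn as nn


class SSRTBlock(nn.Module):
    """Toy self-similarity refinement: spatial self-attention plus a
    frequency-domain (FFT amplitude) branch, fused at a single scale."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        self.freq_mix = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Spatial branch: attention over flattened pixels captures non-local
        # self-similarity within the current scale.
        tokens = x.flatten(2).transpose(1, 2)                     # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        spatial = self.norm(tokens + attn_out).transpose(1, 2).reshape(b, c, h, w)
        # Frequency branch: amplitude spectrum as a crude frequency-domain cue.
        amp = torch.abs(torch.fft.fft2(x, norm="ortho"))
        return self.freq_mix(torch.cat([spatial, amp], dim=1))


class SMCETSketch(nn.Module):
    """Symmetric encoder-decoder: refinement blocks are applied at each scale
    on the way down and up; the PAN image guides the upsampled MS input."""

    def __init__(self, ms_bands: int = 4, width: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(ms_bands + 1, width, 3, padding=1)
        self.enc1, self.enc2 = SSRTBlock(width), SSRTBlock(width * 2)
        self.down = nn.Conv2d(width, width * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec1 = SSRTBlock(width)
        self.head = nn.Conv2d(width, ms_bands, 3, padding=1)

    def forward(self, ms_up: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        x = self.stem(torch.cat([ms_up, pan], dim=1))
        s1 = self.enc1(x)                  # refinement at full resolution
        s2 = self.enc2(self.down(s1))      # refinement at half resolution
        d1 = self.dec1(self.up(s2) + s1)   # symmetric decoding with skip connection
        return self.head(d1) + ms_up       # residual over the upsampled MS image


if __name__ == "__main__":
    ms_up = torch.randn(1, 4, 64, 64)       # upsampled multispectral input
    pan = torch.randn(1, 1, 64, 64)         # panchromatic guidance
    print(SMCETSketch()(ms_up, pan).shape)  # torch.Size([1, 4, 64, 64])
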
Persistent Identifier: http://hdl.handle.net/10722/354808
ISSN: 0893-6080
2023 Impact Factor: 6.0
2023 SCImago Journal Rankings: 2.605

 

DC Field | Value | Language
dc.contributor.author | Li, Yong | -
dc.contributor.author | Wang, Yi | -
dc.contributor.author | Shi, Shuai | -
dc.contributor.author | Wang, Jiaming | -
dc.contributor.author | Wang, Ruiyang | -
dc.contributor.author | Lu, Mengqian | -
dc.contributor.author | Zhang, Fan | -
dc.date.accessioned | 2025-03-11T00:35:10Z | -
dc.date.available | 2025-03-11T00:35:10Z | -
dc.date.issued | 2025-02-01 | -
dc.identifier.citation | Neural Networks, 2025, v. 185 | -
dc.identifier.issn | 0893-6080 | -
dc.identifier.uri | http://hdl.handle.net/10722/354808 | -
dc.description.abstract | Pan-sharpening is a widely employed technique for enhancing the quality and accuracy of remote sensing images, particularly for downstream tasks that require high-resolution imagery. However, existing deep-learning methods often neglect the self-similarity present in remote sensing images. Ignoring it can result in poor fusion of texture and spectral details, leading to artifacts such as ringing and reduced clarity in the fused image. To address these limitations, we propose the Symmetric Multi-Scale Correction-Enhancement Transformers (SMCET) model. SMCET incorporates a Self-Similarity Refinement Transformers (SSRT) module to capture self-similarity from the frequency and spatial domains within a single scale, and an encoder–decoder framework that applies multi-scale transformations to model self-similarity across scales. Our experiments on multiple satellite datasets demonstrate that SMCET outperforms existing methods, offering superior texture and spectral details. The SMCET source code can be accessed at https://github.com/yonglleee/SMCET. | -
dc.language | eng | -
dc.publisher | Elsevier | -
dc.relation.ispartof | Neural Networks | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Pan-sharpening | -
dc.subject | Remote sensing image fusion | -
dc.subject | Self-similarity | -
dc.subject | Vision transformers | -
dc.title | Pan-sharpening via Symmetric Multi-Scale Correction-Enhancement Transformers | -
dc.type | Article | -
dc.identifier.doi | 10.1016/j.neunet.2025.107226 | -
dc.identifier.scopus | eid_2-s2.0-85217014616 | -
dc.identifier.volume | 185 | -
dc.identifier.eissn | 1879-2782 | -
dc.identifier.issnl | 0893-6080 | -
