Article: Contrastive Learning for Urban Land Cover Classification With Multimodal Siamese Network

Title: Contrastive Learning for Urban Land Cover Classification With Multimodal Siamese Network
Authors: Liu, Rui; Ling, Jing; Lin, Yinyi; Zhang, Hongsheng
Keywords: Contrastive learning; optical and SAR data; self-supervised learning; urban land cover classification
Issue Date: 1-Jan-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Geoscience and Remote Sensing Letters, 2024, v. 21
Abstract: The Earth observation era has bestowed dividends upon supervised land cover classification based on deep learning and optical data. However, limitations, such as insufficient spectral information and reduced quality during inclement weather for optical data, coupled with the need for extensive labeled samples, impede accurate classification. This letter harnesses multimodal images with deep contrastive learning to reduce reliance on labeled data and classify land covers. By employing a well-designed contrastive learning method with triangular similarity loss, our model can learn effective multimodal features without labeled samples. Moreover, the learned features are fused at the early feature level and used for the downstream classification task with fewer labeled samples. Experimental results demonstrate the benefits of incorporating multiple modalities, highlighting the potential of combining multimodal image analysis and contrastive learning for land cover classification with limited labeled samples. (See the illustrative sketch below.)
Persistent Identifier: http://hdl.handle.net/10722/361961
ISSN: 1545-598X
2023 Impact Factor: 4.0
2023 SCImago Journal Rankings: 1.248
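
The abstract describes a two-stage pipeline: self-supervised pretraining of a two-branch (Siamese) network on unlabeled optical/SAR pairs with a contrastive objective, followed by early feature-level fusion for a classifier trained on few labels. The following minimal PyTorch sketch illustrates that structure only. The encoder architecture, channel counts, patch size, class count, and the InfoNCE-style loss standing in for the paper's triangular similarity loss are all illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of the multimodal Siamese setup described in the abstract.
# All hyperparameters and the InfoNCE-style loss below are assumptions; the
# paper's triangular similarity loss is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN encoder; one instance per modality (optical or SAR)."""
    def __init__(self, in_ch, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z_opt, z_sar, temperature=0.1):
    """InfoNCE-style loss pulling co-located optical/SAR features together
    (a stand-in for the paper's triangular similarity loss)."""
    z_opt = F.normalize(z_opt, dim=1)
    z_sar = F.normalize(z_sar, dim=1)
    logits = z_opt @ z_sar.t() / temperature  # pairwise cross-modal similarities
    targets = torch.arange(z_opt.size(0))     # matched pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Stage 1: self-supervised pretraining on unlabeled co-located image pairs.
opt_enc = Encoder(in_ch=4)   # assumed 4 optical bands
sar_enc = Encoder(in_ch=2)   # assumed 2 SAR polarizations
optimizer = torch.optim.Adam(
    list(opt_enc.parameters()) + list(sar_enc.parameters()), lr=1e-3)

optical = torch.randn(16, 4, 64, 64)  # placeholder unlabeled patch batch
sar = torch.randn(16, 2, 64, 64)
optimizer.zero_grad()
loss = contrastive_loss(opt_enc(optical), sar_enc(sar))
loss.backward()
optimizer.step()

# Stage 2: early feature-level fusion, then classification with few labels.
classifier = nn.Linear(128 * 2, 6)    # 6 land cover classes, assumed
with torch.no_grad():
    fused = torch.cat([opt_enc(optical), sar_enc(sar)], dim=1)
labels = torch.randint(0, 6, (16,))   # placeholder labels
cls_loss = F.cross_entropy(classifier(fused), labels)

In this sketch the fusion is a simple concatenation of the two frozen modality features before the classifier, which is one common reading of "fused at the early feature level"; the paper's exact fusion scheme may differ.
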


DC Field: Value
dc.contributor.author: Liu, Rui
dc.contributor.author: Ling, Jing
dc.contributor.author: Lin, Yinyi
dc.contributor.author: Zhang, Hongsheng
dc.date.accessioned: 2025-09-18T00:35:50Z
dc.date.available: 2025-09-18T00:35:50Z
dc.date.issued: 2024-01-01
dc.identifier.citation: IEEE Geoscience and Remote Sensing Letters, 2024, v. 21
dc.identifier.issn: 1545-598X
dc.identifier.uri: http://hdl.handle.net/10722/361961
dc.description.abstract: The Earth observation era has bestowed dividends upon supervised land cover classification based on deep learning and optical data. However, limitations, such as insufficient spectral information and reduced quality during inclement weather for optical data, coupled with the need for extensive labeled samples, impede accurate classification. This letter harnesses multimodal images with deep contrastive learning to reduce reliance on labeled data and classify land covers. By employing a well-designed contrastive learning method with triangular similarity loss, our model can learn effective multimodal features without labeled samples. Moreover, the learned features are fused at the early feature level and used for the downstream classification task with fewer labeled samples. Experimental results demonstrate the benefits of incorporating multiple modalities, highlighting the potential of combining multimodal image analysis and contrastive learning for land cover classification with limited labeled samples.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Geoscience and Remote Sensing Letters
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Contrastive learning
dc.subject: optical and SAR data
dc.subject: self-supervised learning
dc.subject: urban land cover classification
dc.title: Contrastive Learning for Urban Land Cover Classification With Multimodal Siamese Network
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1109/LGRS.2024.3442434
dc.identifier.scopus: eid_2-s2.0-85201299323
dc.identifier.volume: 21
dc.identifier.eissn: 1558-0571
dc.identifier.issnl: 1545-598X
