Article: DGCC-EB: Deep Global Context Construction With an Enabled Boundary for Land Use Mapping of CSMA

Title: DGCC-EB: Deep Global Context Construction With an Enabled Boundary for Land Use Mapping of CSMA
Authors: Zhang, Hanchao; Zang, Ning; Cao, Yun; Wang, Yuebin; Zhang, Liqiang; Huang, Bo; Takis Mathiopoulos, P.
Keywords: Graph convolutional network (GCN)
high spatial resolution
land-use mapping (LUM)
transfer learning
vision transformer (ViT)
Issue Date: 2022
Citation: IEEE Transactions on Geoscience and Remote Sensing, 2022, v. 60, article no. 5634915
Abstract: Land use mapping (LUM) of a coal mining subsidence area (CMSA) is a significant task. The application of convolutional neural networks (CNNs) has become prevalent in LUM, which can achieve promising performances. However, CNNs cannot process irregular data; as a result, the boundary information is overlooked. The graph convolutional network (GCN) flexibly operates with irregular regions to capture the contextual relations among neighbors. However, the global context is not considered in the GCN. In this article, we develop the deep global context construction with enabled boundary (DGCC-EB) for the LUM of the CMSA. An original Google Earth image is partitioned into nonoverlapping processing units. The DGCC-EB extracts the preliminary features from the processing unit that are further divided into nonoverlapping superpixels with irregular edges. The superpixel features are generated and then embedded into the GCN and vision transformer (ViT). In the GCN, the graph convolution is applied to superpixel features; therefore, the boundary information of objects can be preserved. In the ViT, the multihead attention blocks and positional encoding build the global context among the superpixel features. The feature constraint is calculated to fuse the advantages of the features extracted from the GCN and ViT. To improve the LUM accuracy, the cross-entropy (CE) loss is calculated. The DGCC-EB integrates all modules into a whole end-to-end framework and is then optimized by a customized algorithm. The results of case studies show that the proposed DGCC-EB obtained the acceptable overall accuracy (OA) (89.06%/88.68%) and Kappa (0.86/0.87) values for Shouzhou city and Zezhou city, respectively.
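The dual-branch design described in the abstract (a GCN branch operating on superpixel adjacency to preserve boundaries, and a ViT-style attention branch building global context, tied together by a feature constraint) can be illustrated with a minimal numpy sketch. The shapes, random weights, single-head attention, additive fusion, and mean-squared feature constraint below are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gcn_layer(A, X, W):
    # Symmetrically normalized graph convolution with self-loops:
    # ReLU( D^{-1/2} (A + I) D^{-1/2} X W )
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product attention over superpixel tokens,
    # standing in for the multihead ViT branch
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    return softmax(scores, axis=-1) @ V

# Toy example: 5 superpixels, each with an 8-dim feature vector
rng = np.random.default_rng(0)
N, F = 5, 8
X = rng.normal(size=(N, F))

# Random symmetric adjacency among superpixels (hypothetical neighborhood graph)
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)

Wg = rng.normal(size=(F, F))
Wq, Wk, Wv = (rng.normal(size=(F, F)) for _ in range(3))

Zg = gcn_layer(A, X, Wg)            # boundary-aware local context (GCN branch)
Zt = self_attention(X, Wq, Wk, Wv)  # global context (attention branch)

# Feature constraint: one plausible form is a distance pulling the two
# branch features together; fusion here is a simple additive combination
constraint = np.mean((Zg - Zt) ** 2)
fused = Zg + Zt
print(fused.shape)
```

In a trained model, the constraint term would be added to the cross-entropy loss so both branches learn mutually consistent superpixel features before fusion.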
Persistent Identifier: http://hdl.handle.net/10722/329903
ISSN: 0196-2892
2021 Impact Factor: 8.125
2020 SCImago Journal Rankings: 2.141
ISI Accession Number ID: WOS:000900003000004

 

DC Field: Value
dc.contributor.author: Zhang, Hanchao
dc.contributor.author: Zang, Ning
dc.contributor.author: Cao, Yun
dc.contributor.author: Wang, Yuebin
dc.contributor.author: Zhang, Liqiang
dc.contributor.author: Huang, Bo
dc.contributor.author: Takis Mathiopoulos, P.
dc.date.accessioned: 2023-08-09T03:36:19Z
dc.date.available: 2023-08-09T03:36:19Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Transactions on Geoscience and Remote Sensing, 2022, v. 60, article no. 5634915
dc.identifier.issn: 0196-2892
dc.identifier.uri: http://hdl.handle.net/10722/329903
dc.description.abstract: Land use mapping (LUM) of a coal mining subsidence area (CMSA) is a significant task. The application of convolutional neural networks (CNNs) has become prevalent in LUM, which can achieve promising performances. However, CNNs cannot process irregular data; as a result, the boundary information is overlooked. The graph convolutional network (GCN) flexibly operates with irregular regions to capture the contextual relations among neighbors. However, the global context is not considered in the GCN. In this article, we develop the deep global context construction with enabled boundary (DGCC-EB) for the LUM of the CMSA. An original Google Earth image is partitioned into nonoverlapping processing units. The DGCC-EB extracts the preliminary features from the processing unit that are further divided into nonoverlapping superpixels with irregular edges. The superpixel features are generated and then embedded into the GCN and vision transformer (ViT). In the GCN, the graph convolution is applied to superpixel features; therefore, the boundary information of objects can be preserved. In the ViT, the multihead attention blocks and positional encoding build the global context among the superpixel features. The feature constraint is calculated to fuse the advantages of the features extracted from the GCN and ViT. To improve the LUM accuracy, the cross-entropy (CE) loss is calculated. The DGCC-EB integrates all modules into a whole end-to-end framework and is then optimized by a customized algorithm. The results of case studies show that the proposed DGCC-EB obtained the acceptable overall accuracy (OA) (89.06%/88.68%) and Kappa (0.86/0.87) values for Shouzhou city and Zezhou city, respectively.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Geoscience and Remote Sensing
dc.subject: Graph convolutional network (GCN)
dc.subject: high spatial resolution
dc.subject: land-use mapping (LUM)
dc.subject: transfer learning
dc.subject: vision transformer (ViT)
dc.title: DGCC-EB: Deep Global Context Construction With an Enabled Boundary for Land Use Mapping of CSMA
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TGRS.2022.3224733
dc.identifier.scopus: eid_2-s2.0-85144057189
dc.identifier.volume: 60
dc.identifier.spage: article no. 5634915
dc.identifier.epage: article no. 5634915
dc.identifier.eissn: 1558-0644
dc.identifier.isi: WOS:000900003000004
