Article: Building Extraction based on SE-Unet

Title: Building Extraction based on SE-Unet
Authors: Liu, Hao; Luo, Jiancheng; Huang, Bo; Yang, Haiping; Hu, Xiaodong; Xu, Nan; Xia, Liegang
Keywords: Building extraction
Convolutional neural network
Deep learning
High spatial resolution remote sensing imagery
Loss function
Massachusetts building dataset
SE-Unet
Issue Date: 2019
Citation: Journal of Geo-Information Science, 2019, v. 21, n. 11, p. 1779-1789
Abstract: Automatic extraction of urban buildings is of great importance in applications such as urban planning and disaster prevention. In this regard, high-resolution remote sensing imagery contains sufficient information and is ideal data for precise extraction. Traditional approaches (excluding visual interpretation) require researchers to manually design features that describe buildings and distinguish them from other objects. Unfortunately, the complexity of high-resolution imagery makes these features fragile under changes of sensor, imaging conditions, and location. Recently, convolutional neural networks, which have succeeded in many visual applications including image segmentation, have been used to extract buildings from high spatial resolution remote sensing imagery and have achieved desirable results. However, convolutional neural networks still leave much room for improvement, especially regarding network architecture and loss functions. This paper proposes a convolutional neural network, SE-Unet. It is based on the U-Net architecture and employs squeeze-and-excitation modules in its encoder. The squeeze-and-excitation modules activate useful features and deactivate useless ones in an adaptively weighted manner, which can remarkably increase network capacity at the cost of only a few extra parameters and little memory. The decoder of SE-Unet concatenates the corresponding encoder features to recover spatial information, as U-Net does. A combined Dice and cross-entropy loss function was applied to train the network and successfully alleviated the sample-imbalance problem in building extraction. All experiments were performed on the Massachusetts building dataset for evaluation. Compared with SegNet, LinkNet, U-Net, and other networks, SE-Unet showed the best results on all evaluation metrics, achieving 0.8704, 0.8496, 0.8599, and 0.9472 in precision, recall, F1-score, and overall accuracy, respectively. SE-Unet also delivered better precision when extracting buildings that vary in size and shape. Our findings show that squeeze-and-excitation modules can effectively strengthen network capability, and that the combined Dice and cross-entropy loss function can be useful in other sample-imbalanced situations involving high-resolution remote sensing imagery.
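
The abstract describes two components in enough detail to sketch them in code: the squeeze-and-excitation module inserted into the encoder, and the combined Dice and cross-entropy loss used to counter the foreground/background imbalance of building masks. The snippet below is a minimal illustration only, assuming a PyTorch implementation; the class names, the channel-reduction ratio, and the equal weighting of the two loss terms are assumptions of this sketch, not details confirmed by the paper.

```python
# Minimal sketch of a squeeze-and-excitation block and a Dice + cross-entropy loss.
# Assumes PyTorch; names and hyperparameters (e.g. reduction=16) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel weights and rescale the feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pooling collapses each channel to a single value.
        s = x.mean(dim=(2, 3))                              # (N, C)
        # Excitation: a small bottleneck MLP produces channel weights in (0, 1).
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))    # (N, C)
        # Re-weight: useful channels are amplified, less useful ones suppressed.
        return x * w.unsqueeze(-1).unsqueeze(-1)


class DiceCELoss(nn.Module):
    """Sum of soft Dice loss and binary cross-entropy for imbalanced binary masks."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (N, 1, H, W); target holds 0/1 building masks.
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1.0 - (2.0 * inter + self.eps) / (union + self.eps)
        ce = F.binary_cross_entropy_with_logits(
            logits, target, reduction="none"
        ).mean(dim=(1, 2, 3))
        return (dice + ce).mean()
```

In such a design the SE block would typically follow each convolutional stage of the U-Net encoder, while the combined loss replaces plain cross-entropy during training; the Dice term rewards overlap with the sparse building class directly, which is what makes the combination robust to sample imbalance.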
Persistent Identifier: http://hdl.handle.net/10722/329800
ISSN: 1560-8999
2023 SCImago Journal Rankings: 0.330

 

DC Field: Value
dc.contributor.author: Liu, Hao
dc.contributor.author: Luo, Jiancheng
dc.contributor.author: Huang, Bo
dc.contributor.author: Yang, Haiping
dc.contributor.author: Hu, Xiaodong
dc.contributor.author: Xu, Nan
dc.contributor.author: Xia, Liegang
dc.date.accessioned: 2023-08-09T03:35:25Z
dc.date.available: 2023-08-09T03:35:25Z
dc.date.issued: 2019
dc.identifier.citation: Journal of Geo-Information Science, 2019, v. 21, n. 11, p. 1779-1789
dc.identifier.issn: 1560-8999
dc.identifier.uri: http://hdl.handle.net/10722/329800
dc.description.abstract: Automatic extraction of urban buildings is of great importance in applications such as urban planning and disaster prevention. In this regard, high-resolution remote sensing imagery contains sufficient information and is ideal data for precise extraction. Traditional approaches (excluding visual interpretation) require researchers to manually design features that describe buildings and distinguish them from other objects. Unfortunately, the complexity of high-resolution imagery makes these features fragile under changes of sensor, imaging conditions, and location. Recently, convolutional neural networks, which have succeeded in many visual applications including image segmentation, have been used to extract buildings from high spatial resolution remote sensing imagery and have achieved desirable results. However, convolutional neural networks still leave much room for improvement, especially regarding network architecture and loss functions. This paper proposes a convolutional neural network, SE-Unet. It is based on the U-Net architecture and employs squeeze-and-excitation modules in its encoder. The squeeze-and-excitation modules activate useful features and deactivate useless ones in an adaptively weighted manner, which can remarkably increase network capacity at the cost of only a few extra parameters and little memory. The decoder of SE-Unet concatenates the corresponding encoder features to recover spatial information, as U-Net does. A combined Dice and cross-entropy loss function was applied to train the network and successfully alleviated the sample-imbalance problem in building extraction. All experiments were performed on the Massachusetts building dataset for evaluation. Compared with SegNet, LinkNet, U-Net, and other networks, SE-Unet showed the best results on all evaluation metrics, achieving 0.8704, 0.8496, 0.8599, and 0.9472 in precision, recall, F1-score, and overall accuracy, respectively. SE-Unet also delivered better precision when extracting buildings that vary in size and shape. Our findings show that squeeze-and-excitation modules can effectively strengthen network capability, and that the combined Dice and cross-entropy loss function can be useful in other sample-imbalanced situations involving high-resolution remote sensing imagery.
dc.language: eng
dc.relation.ispartof: Journal of Geo-Information Science
dc.subject: Building extraction
dc.subject: Convolutional neural network
dc.subject: Deep learning
dc.subject: High spatial resolution remote sensing imagery
dc.subject: Loss function
dc.subject: Massachusetts building dataset
dc.subject: SE-Unet
dc.title: Building Extraction based on SE-Unet
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.12082/dqxxkx.2019.190285
dc.identifier.scopus: eid_2-s2.0-85128134362
dc.identifier.volume: 21
dc.identifier.issue: 11
dc.identifier.spage: 1779
dc.identifier.epage: 1789
