Article: Robust object detection in extreme construction conditions

Title: Robust object detection in extreme construction conditions
Authors: Ding, Yuexiong; Zhang, Ming; Pan, Jia; Hu, Jinxing; Luo, Xiaowei
Keywords: Construction industry; Extreme conditions; Extreme construction dataset; Image adaptation; Neural style transfer; Robust object detection
Issue Date: 1-Sep-2024
Publisher: Elsevier
Citation: Automation in Construction, 2024, v. 165
Abstract

Current construction object detection models are vulnerable in complex conditions, as they are trained on conventional data and lack robustness in extreme situations. The lack of extreme data with relevant annotations worsens this situation. A new end-to-end unified image adaptation You-Only-Look-Once-v5 (UIA-YOLOv5) model is presented for robust object detection in five extreme conditions: low light, intense light, fog, dust, and rain. The UIA-YOLOv5 adaptively enhances the input image to make its content visually clear and then feeds the enhanced image to the YOLOv5 for object detection. Sufficient extreme images are synthesized via neural style transfer (NST) and mixed with conventional data during model training to reduce domain shift. An extreme construction dataset (ExtCon) containing 506 images labeled with 13 object classes is constructed for real-world evaluation. Results show that the UIA-YOLOv5 matches the YOLOv5's performance on conventional data while being more robust on extreme data, with an 8.21% mAP@0.5 improvement.
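The enhance-then-detect pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the real UIA-YOLOv5 learns its image-adaptation parameters end-to-end with the detector, whereas the `adaptive_enhance` rule below (a hand-picked gamma correction keyed to mean brightness) and the `detect` stub (standing in for a YOLOv5 forward pass) are hypothetical simplifications used only to show the data flow.

```python
import numpy as np


def adaptive_enhance(image: np.ndarray) -> np.ndarray:
    """Toy image-adaptation step: gamma correction chosen from mean brightness.

    Assumes pixel values in [0, 1]. The actual model predicts enhancement
    parameters from the image content; this fixed rule only mimics the idea of
    brightening low-light inputs and suppressing over-exposed ones.
    """
    mean = float(image.mean())
    if mean < 0.3:        # low light: gamma < 1 brightens
        gamma = 0.5
    elif mean > 0.7:      # intense light: gamma > 1 darkens
        gamma = 2.0
    else:                 # conventional image: pass through unchanged
        gamma = 1.0
    return np.clip(image ** gamma, 0.0, 1.0)


def detect(image: np.ndarray) -> list:
    """Placeholder detector standing in for YOLOv5 (returns dummy boxes)."""
    return [{"bbox": (0, 0, 10, 10), "label": "worker", "score": 0.9}]


def uia_pipeline(image: np.ndarray) -> list:
    """Unified pipeline: adaptively enhance first, then detect objects."""
    return detect(adaptive_enhance(image))
```

The key design point the sketch preserves is that enhancement and detection form one chained pipeline operating on the same image tensor, which is what lets the full model be trained end-to-end.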


Persistent Identifier: http://hdl.handle.net/10722/361849
ISSN: 0926-5805
2023 Impact Factor: 9.6
2023 SCImago Journal Rankings: 2.626

 

DC Field: Value
dc.contributor.author: Ding, Yuexiong
dc.contributor.author: Zhang, Ming
dc.contributor.author: Pan, Jia
dc.contributor.author: Hu, Jinxing
dc.contributor.author: Luo, Xiaowei
dc.date.accessioned: 2025-09-17T00:31:07Z
dc.date.available: 2025-09-17T00:31:07Z
dc.date.issued: 2024-09-01
dc.identifier.citation: Automation in Construction, 2024, v. 165
dc.identifier.issn: 0926-5805
dc.identifier.uri: http://hdl.handle.net/10722/361849
dc.description.abstract: Current construction object detection models are vulnerable in complex conditions, as they are trained on conventional data and lack robustness in extreme situations. The lack of extreme data with relevant annotations worsens this situation. A new end-to-end unified image adaptation You-Only-Look-Once-v5 (UIA-YOLOv5) model is presented for robust object detection in five extreme conditions: low light, intense light, fog, dust, and rain. The UIA-YOLOv5 adaptively enhances the input image to make its content visually clear and then feeds the enhanced image to the YOLOv5 for object detection. Sufficient extreme images are synthesized via neural style transfer (NST) and mixed with conventional data during model training to reduce domain shift. An extreme construction dataset (ExtCon) containing 506 images labeled with 13 object classes is constructed for real-world evaluation. Results show that the UIA-YOLOv5 matches the YOLOv5's performance on conventional data while being more robust on extreme data, with an 8.21% mAP@0.5 improvement.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Automation in Construction
dc.subject: Construction industry
dc.subject: Extreme conditions
dc.subject: Extreme construction dataset
dc.subject: Image adaptation
dc.subject: Neural style transfer
dc.subject: Robust object detection
dc.title: Robust object detection in extreme construction conditions
dc.type: Article
dc.identifier.doi: 10.1016/j.autcon.2024.105487
dc.identifier.scopus: eid_2-s2.0-85195815805
dc.identifier.volume: 165
dc.identifier.eissn: 1872-7891
dc.identifier.issnl: 0926-5805
