Article: Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation

Title: Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation
Authors: Xia, Yingda; Yang, Dong; Yu, Zhiding; Liu, Fengze; Cai, Jinzheng; Yu, Lequan; Zhu, Zhuotun; Xu, Daguang; Yuille, Alan; Roth, Holger
Keywords: Uncertainty estimation; Segmentation; Domain adaptation; Semi-supervised learning
Issue Date: 2020
Citation: Medical Image Analysis, 2020, v. 65, article no. 101766
Abstract: Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive in the field of medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses these two tasks for volumetric medical image segmentation. Our framework is capable of efficiently utilizing unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate of each view is utilized to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon Datasets. Additionally, we show that our UMCT-DA model can even effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.
Persistent Identifier: http://hdl.handle.net/10722/299463
ISSN: 1361-8415
2023 Impact Factor: 10.7
2023 SCImago Journal Rankings: 4.112
ISI Accession Number ID: WOS:000567866400010
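
The abstract above describes the core training scheme: build multiple axis-permuted views of each 3D volume, train one network per view, and co-train on unlabeled data by having each view learn from an uncertainty-weighted combination of the other views' predictions. The snippet below is an illustrative sketch of that idea in PyTorch, not the authors' implementation; all function names (make_views, view_confidence, co_training_loss), the use of max-softmax probability as a confidence proxy, and the choice of KL divergence as the consistency loss are assumptions made for illustration.

```python
# Illustrative sketch only: an uncertainty-weighted multi-view co-training
# step. Names and the specific uncertainty measure are hypothetical; the
# paper's method may build views and estimate uncertainty differently.
import torch
import torch.nn.functional as F

def make_views(volume):
    """Build three axis-permuted 'views' of a 3D volume shaped (B, C, D, H, W)."""
    axial    = volume
    coronal  = volume.permute(0, 1, 3, 2, 4).contiguous()
    sagittal = volume.permute(0, 1, 4, 2, 3).contiguous()
    return [axial, coronal, sagittal]

def view_confidence(logits):
    """Crude per-view confidence: mean max softmax probability
    (a stand-in for the paper's per-view uncertainty estimate)."""
    probs = F.softmax(logits, dim=1)
    return probs.max(dim=1).values.mean()

def co_training_loss(models, unlabeled_volume):
    """Each view is supervised by a confidence-weighted average of the
    other views' soft predictions on the same unlabeled volume."""
    views = make_views(unlabeled_volume)
    logits = [m(v) for m, v in zip(models, views)]
    # Map every prediction back to the axial orientation before comparing.
    back = [
        logits[0],
        logits[1].permute(0, 1, 3, 2, 4),
        logits[2].permute(0, 1, 3, 4, 2),
    ]
    probs = [F.softmax(l, dim=1) for l in back]
    conf = torch.stack([view_confidence(l) for l in back])

    loss = 0.0
    for i in range(len(models)):
        # Pseudo-label for view i: confidence-weighted mix of the other views.
        others = [j for j in range(len(models)) if j != i]
        w = conf[others] / conf[others].sum()
        target = sum(w[k] * probs[j].detach() for k, j in enumerate(others))
        loss = loss + F.kl_div(F.log_softmax(back[i], dim=1), target,
                               reduction='batchmean')
    return loss / len(models)
```

In a full training loop this consistency term would be added to an ordinary supervised segmentation loss computed on the labeled subset, so labeled and unlabeled volumes contribute jointly to each update.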

 

DC Field: Value
dc.contributor.author: Xia, Yingda
dc.contributor.author: Yang, Dong
dc.contributor.author: Yu, Zhiding
dc.contributor.author: Liu, Fengze
dc.contributor.author: Cai, Jinzheng
dc.contributor.author: Yu, Lequan
dc.contributor.author: Zhu, Zhuotun
dc.contributor.author: Xu, Daguang
dc.contributor.author: Yuille, Alan
dc.contributor.author: Roth, Holger
dc.date.accessioned: 2021-05-21T03:34:27Z
dc.date.available: 2021-05-21T03:34:27Z
dc.date.issued: 2020
dc.identifier.citation: Medical Image Analysis, 2020, v. 65, article no. 101766
dc.identifier.issn: 1361-8415
dc.identifier.uri: http://hdl.handle.net/10722/299463
dc.description.abstract: Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive in the field of medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses these two tasks for volumetric medical image segmentation. Our framework is capable of efficiently utilizing unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate of each view is utilized to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon Datasets. Additionally, we show that our UMCT-DA model can even effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.
dc.language: eng
dc.relation.ispartof: Medical Image Analysis
dc.subject: Uncertainty estimation
dc.subject: Segmentation
dc.subject: Domain adaptation
dc.subject: Semi-supervised learning
dc.title: Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.media.2020.101766
dc.identifier.pmid: 32623276
dc.identifier.scopus: eid_2-s2.0-85087275458
dc.identifier.volume: 65
dc.identifier.spage: article no. 101766
dc.identifier.epage: article no. 101766
dc.identifier.eissn: 1361-8423
dc.identifier.isi: WOS:000567866400010
