Conference Paper: 3D semi-supervised learning with uncertainty-aware multi-view co-training

Title: 3D semi-supervised learning with uncertainty-aware multi-view co-training
Authors: Xia, Yingda; Liu, Fengze; Yang, Dong; Cai, Jinzheng; Yu, Lequan; Zhu, Zhuotun; Xu, Daguang; Yuille, Alan; Roth, Holger
Issue Date: 2020
Citation: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, 2020, p. 3635-3644
Abstract: While making a tremendous impact in various fields, deep neural networks usually require large amounts of labeled data for training, which are expensive to collect in many applications, especially in the medical domain. Unlabeled data, on the other hand, is much more abundant. Semi-supervised learning techniques, such as co-training, could provide a powerful tool to leverage unlabeled data. In this paper, we propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data from medical imaging. In our work, co-training is achieved by exploiting multi-viewpoint consistency of 3D data. We generate different views by rotating or permuting the 3D data and utilize asymmetrical 3D kernels to encourage diversified features in different sub-networks. In addition, we propose an uncertainty-weighted label fusion mechanism to estimate the reliability of each view's prediction with Bayesian deep learning. As one view requires supervision from the other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample in order to assign a reliable pseudo label. Thus, our approach can take advantage of unlabeled data during training. We show the effectiveness of our proposed semi-supervised method on several public datasets from medical image segmentation tasks (the NIH pancreas dataset and the LiTS liver tumor dataset). Meanwhile, a fully-supervised method based on our approach achieved state-of-the-art performances on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework, even when fully supervised training is feasible.
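The uncertainty-weighted label fusion described in the abstract can be sketched as follows. This is an illustrative reconstruction under assumed details, not the authors' implementation: `mc_dropout_predict` stands in for T stochastic (dropout-on) forward passes of one view's sub-network (simulated here by perturbing fixed logits with Gaussian noise), and the fusion step weights each view's mean soft prediction by the inverse of its Monte Carlo predictive variance, so that more confident views contribute more to the pseudo label.

```python
import numpy as np

def mc_dropout_predict(view_logits, T=10, noise=0.1, rng=None):
    """Simulate T stochastic forward passes for one view.

    In the real method each pass would run the view's sub-network with
    dropout active; here Gaussian noise on fixed logits is a stand-in.
    Returns an array of shape (T, n_classes) of softmax probabilities.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    samples = []
    for _ in range(T):
        z = view_logits + rng.normal(0.0, noise, size=view_logits.shape)
        e = np.exp(z - z.max())          # numerically stable softmax
        samples.append(e / e.sum())
    return np.stack(samples)

def uncertainty_weighted_fusion(all_view_probs, eps=1e-8):
    """Fuse per-view Monte Carlo predictions into one pseudo-label distribution.

    all_view_probs: list of (T, n_classes) arrays, one per view.
    Each view's mean prediction is weighted by the inverse of its mean
    predictive variance, so low-uncertainty views dominate the pseudo label.
    """
    means, weights = [], []
    for probs in all_view_probs:
        means.append(probs.mean(axis=0))
        weights.append(1.0 / (probs.var(axis=0).mean() + eps))
    weights = np.array(weights)
    weights /= weights.sum()
    fused = sum(w * m for w, m in zip(weights, means))
    return fused / fused.sum()

# Two hypothetical views of the same unlabeled sample: the second view is
# noisier (higher MC variance), so it receives a smaller fusion weight.
rng = np.random.default_rng(42)
views = [
    mc_dropout_predict(np.array([2.0, 0.5, 0.1]), rng=rng),
    mc_dropout_predict(np.array([1.8, 0.7, 0.2]), noise=0.5, rng=rng),
]
pseudo_label = uncertainty_weighted_fusion(views)
```

In the paper's setting the views are rotated or permuted copies of the same 3D volume, and the fused distribution serves as the pseudo label supervising each sub-network on unlabeled data; the toy logits above are purely illustrative.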
Persistent Identifier: http://hdl.handle.net/10722/299461
ISI Accession Number ID: WOS:000578444803075


DC Field: Value
dc.contributor.author: Xia, Yingda
dc.contributor.author: Liu, Fengze
dc.contributor.author: Yang, Dong
dc.contributor.author: Cai, Jinzheng
dc.contributor.author: Yu, Lequan
dc.contributor.author: Zhu, Zhuotun
dc.contributor.author: Xu, Daguang
dc.contributor.author: Yuille, Alan
dc.contributor.author: Roth, Holger
dc.date.accessioned: 2021-05-21T03:34:27Z
dc.date.available: 2021-05-21T03:34:27Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, 2020, p. 3635-3644
dc.identifier.uri: http://hdl.handle.net/10722/299461
dc.description.abstract: While making a tremendous impact in various fields, deep neural networks usually require large amounts of labeled data for training, which are expensive to collect in many applications, especially in the medical domain. Unlabeled data, on the other hand, is much more abundant. Semi-supervised learning techniques, such as co-training, could provide a powerful tool to leverage unlabeled data. In this paper, we propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data from medical imaging. In our work, co-training is achieved by exploiting multi-viewpoint consistency of 3D data. We generate different views by rotating or permuting the 3D data and utilize asymmetrical 3D kernels to encourage diversified features in different sub-networks. In addition, we propose an uncertainty-weighted label fusion mechanism to estimate the reliability of each view's prediction with Bayesian deep learning. As one view requires supervision from the other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample in order to assign a reliable pseudo label. Thus, our approach can take advantage of unlabeled data during training. We show the effectiveness of our proposed semi-supervised method on several public datasets from medical image segmentation tasks (the NIH pancreas dataset and the LiTS liver tumor dataset). Meanwhile, a fully-supervised method based on our approach achieved state-of-the-art performances on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework, even when fully supervised training is feasible.
dc.language: eng
dc.relation.ispartof: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
dc.title: 3D semi-supervised learning with uncertainty-aware multi-view co-training
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/WACV45572.2020.9093608
dc.identifier.scopus: eid_2-s2.0-85085495807
dc.identifier.spage: 3635
dc.identifier.epage: 3644
dc.identifier.isi: WOS:000578444803075
