Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/WACV45572.2020.9093608
- Scopus: eid_2-s2.0-85085495807
- WOS: WOS:000578444803075
Conference Paper: 3D semi-supervised learning with uncertainty-aware multi-view co-training
Title | 3D semi-supervised learning with uncertainty-aware multi-view co-training |
---|---|
Authors | Xia, Yingda; Liu, Fengze; Yang, Dong; Cai, Jinzheng; Yu, Lequan; Zhu, Zhuotun; Xu, Daguang; Yuille, Alan; Roth, Holger |
Issue Date | 2020 |
Citation | Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, 2020, p. 3635-3644 |
Abstract | While making a tremendous impact in various fields, deep neural networks usually require large amounts of labeled data for training, which are expensive to collect in many applications, especially in the medical domain. Unlabeled data, on the other hand, is much more abundant. Semi-supervised learning techniques, such as co-training, could provide a powerful tool to leverage unlabeled data. In this paper, we propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data from medical imaging. In our work, co-training is achieved by exploiting the multi-viewpoint consistency of 3D data. We generate different views by rotating or permuting the 3D data and utilize asymmetrical 3D kernels to encourage diversified features in different sub-networks. In addition, we propose an uncertainty-weighted label fusion mechanism to estimate the reliability of each view's prediction with Bayesian deep learning. As one view requires supervision from the other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample in order to assign a reliable pseudo label. Thus, our approach can take advantage of unlabeled data during training. We show the effectiveness of our proposed semi-supervised method on several public datasets from medical image segmentation tasks (NIH pancreas and LiTS liver tumor datasets). Meanwhile, a fully supervised method based on our approach achieved state-of-the-art performance on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework, even when fully supervised training is feasible. |
Persistent Identifier | http://hdl.handle.net/10722/299461 |
ISI Accession Number ID | WOS:000578444803075 |
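The uncertainty-weighted label fusion described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the per-view "networks" are toy functions, the noise-perturbation uncertainty estimate merely stands in for Monte Carlo dropout sampling, all names (`view_predictions`, `mc_uncertainty`, `fuse_pseudo_label`) are hypothetical, and for brevity it fuses all views at once, whereas the paper supervises each view with the uncertainty-weighted fusion of the *other* views' predictions.

```python
import numpy as np

def view_predictions(volume, n_views=3):
    """Generate per-view predictions from axis-permuted 'views' of a 3D
    volume. Each 'sub-network' here is just a toy sigmoid thresholding
    function so the sketch runs without a deep-learning framework."""
    preds = []
    for axis in range(n_views):
        view = np.moveaxis(volume, axis, 0)                     # permute to form a view
        p = 1.0 / (1.0 + np.exp(-(view - view.mean())))         # toy "prediction" in [0, 1]
        preds.append(np.moveaxis(p, 0, axis))                   # permute back so views align
    return preds

def mc_uncertainty(pred, n_samples=8, noise=0.05, seed=0):
    """Crude stand-in for Bayesian uncertainty: perturb the prediction
    several times (mimicking Monte Carlo dropout samples) and take the
    per-voxel variance across samples."""
    rng = np.random.default_rng(seed)
    samples = np.stack([
        np.clip(pred + rng.normal(0.0, noise, pred.shape), 0.0, 1.0)
        for _ in range(n_samples)
    ])
    return samples.var(axis=0)

def fuse_pseudo_label(preds):
    """Uncertainty-weighted fusion: views with lower mean uncertainty
    receive higher weight when forming the pseudo label used to
    supervise training on an unlabeled sample."""
    confidences = [1.0 / (mc_uncertainty(p).mean() + 1e-8) for p in preds]
    weights = np.array(confidences) / np.sum(confidences)       # normalize to sum to 1
    fused = sum(w * p for w, p in zip(weights, preds))          # soft fused prediction
    return (fused > 0.5).astype(np.uint8)                       # hard pseudo label

volume = np.random.default_rng(1).random((8, 8, 8))             # fake unlabeled 3D scan
label = fuse_pseudo_label(view_predictions(volume))
print(label.shape)  # (8, 8, 8)
```

In the full framework each view's confidence score decides how strongly its prediction contributes to the pseudo label for the remaining views, so unreliable views are down-weighted automatically during semi-supervised training.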
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xia, Yingda | - |
dc.contributor.author | Liu, Fengze | - |
dc.contributor.author | Yang, Dong | - |
dc.contributor.author | Cai, Jinzheng | - |
dc.contributor.author | Yu, Lequan | - |
dc.contributor.author | Zhu, Zhuotun | - |
dc.contributor.author | Xu, Daguang | - |
dc.contributor.author | Yuille, Alan | - |
dc.contributor.author | Roth, Holger | - |
dc.date.accessioned | 2021-05-21T03:34:27Z | - |
dc.date.available | 2021-05-21T03:34:27Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, 2020, p. 3635-3644 | - |
dc.identifier.uri | http://hdl.handle.net/10722/299461 | - |
dc.description.abstract | While making a tremendous impact in various fields, deep neural networks usually require large amounts of labeled data for training, which are expensive to collect in many applications, especially in the medical domain. Unlabeled data, on the other hand, is much more abundant. Semi-supervised learning techniques, such as co-training, could provide a powerful tool to leverage unlabeled data. In this paper, we propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data from medical imaging. In our work, co-training is achieved by exploiting the multi-viewpoint consistency of 3D data. We generate different views by rotating or permuting the 3D data and utilize asymmetrical 3D kernels to encourage diversified features in different sub-networks. In addition, we propose an uncertainty-weighted label fusion mechanism to estimate the reliability of each view's prediction with Bayesian deep learning. As one view requires supervision from the other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample in order to assign a reliable pseudo label. Thus, our approach can take advantage of unlabeled data during training. We show the effectiveness of our proposed semi-supervised method on several public datasets from medical image segmentation tasks (NIH pancreas and LiTS liver tumor datasets). Meanwhile, a fully supervised method based on our approach achieved state-of-the-art performance on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework, even when fully supervised training is feasible. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020 | - |
dc.title | 3D semi-supervised learning with uncertainty-aware multi-view co-training | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/WACV45572.2020.9093608 | - |
dc.identifier.scopus | eid_2-s2.0-85085495807 | - |
dc.identifier.spage | 3635 | - |
dc.identifier.epage | 3644 | - |
dc.identifier.isi | WOS:000578444803075 | - |