Conference Paper: Unsupervised cross-modality domain adaptation of convnets for biomedical image segmentations with adversarial loss

Title: Unsupervised cross-modality domain adaptation of convnets for biomedical image segmentations with adversarial loss
Authors: Dou, Qi; Ouyang, Cheng; Chen, Cheng; Chen, Hao; Heng, Pheng Ann
Issue Date: 2018
Citation: IJCAI International Joint Conference on Artificial Intelligence, 2018, v. 2018-July, p. 691-697
Abstract: Convolutional networks (ConvNets) have achieved great success in various challenging vision tasks. However, the performance of ConvNets degrades when encountering domain shift. Domain adaptation is especially significant, yet challenging, in biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating medical data is especially expensive, supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentation. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map target inputs to features aligned with the source-domain feature space. A domain critic module (DCM) is set up to discriminate between the feature spaces of the two domains. We optimize the DAM and DCM via an adversarial loss without using any target-domain labels. Our method is validated by adapting a ConvNet trained on MRI images to unpaired CT data for cardiac structure segmentation, and it achieves very promising results.
Persistent Identifier: http://hdl.handle.net/10722/349280
ISSN: 1045-0823
2020 SCImago Journal Rankings: 0.649
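The adversarial scheme described in the abstract — a domain critic (DCM) trained to separate source features from DAM-mapped target features, while the DAM is trained to fool it — can be sketched with a toy numpy example. The feature shapes, the linear critic, and the loss form below are illustrative assumptions for exposition, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for feature maps: source features come from the segmenter,
# target features from the domain adaptation module (DAM).
# Shapes and distributions are hypothetical, chosen only for illustration.
f_src = rng.normal(0.0, 1.0, size=(8, 16))   # source-domain features
f_tgt = rng.normal(0.5, 1.0, size=(8, 16))   # DAM output on target inputs

# A minimal linear "domain critic module" (DCM); the real DCM is a ConvNet.
w = rng.normal(0.0, 0.1, size=16)

def dcm_logits(f):
    return f @ w

p_src = sigmoid(dcm_logits(f_src))
p_tgt = sigmoid(dcm_logits(f_tgt))
eps = 1e-8

# DCM loss: binary cross-entropy, labeling source features 1 and target 0.
loss_dcm = -np.mean(np.log(p_src + eps)) - np.mean(np.log(1.0 - p_tgt + eps))

# DAM loss: fool the critic, i.e. push target features toward the source label.
loss_dam = -np.mean(np.log(p_tgt + eps))

print(float(loss_dcm), float(loss_dam))
```

In a full implementation these two losses would be minimized alternately with respect to the DCM and DAM parameters, which is the adversarial optimization the abstract refers to.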

 

DC Field | Value | Language
dc.contributor.author | Dou, Qi | -
dc.contributor.author | Ouyang, Cheng | -
dc.contributor.author | Chen, Cheng | -
dc.contributor.author | Chen, Hao | -
dc.contributor.author | Heng, Pheng Ann | -
dc.date.accessioned | 2024-10-17T06:57:29Z | -
dc.date.available | 2024-10-17T06:57:29Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | IJCAI International Joint Conference on Artificial Intelligence, 2018, v. 2018-July, p. 691-697 | -
dc.identifier.issn | 1045-0823 | -
dc.identifier.uri | http://hdl.handle.net/10722/349280 | -
dc.description.abstract | Convolutional networks (ConvNets) have achieved great successes in various challenging vision tasks. However, the performance of ConvNets would degrade when encountering the domain shift. The domain adaptation is more significant while challenging in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating the medical data is especially expensive, the supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentations. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features which are aligned with source domain feature space. A domain critic module (DCM) is set up for discriminating the feature space of both domains. We optimize the DAM and DCM via an adversarial loss without using any target domain label. Our proposed method is validated by adapting a ConvNet trained with MRI images to unpaired CT data for cardiac structures segmentations, and achieved very promising results. | -
dc.language | eng | -
dc.relation.ispartof | IJCAI International Joint Conference on Artificial Intelligence | -
dc.title | Unsupervised cross-modality domain adaptation of convnets for biomedical image segmentations with adversarial loss | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.24963/ijcai.2018/96 | -
dc.identifier.scopus | eid_2-s2.0-85054552245 | -
dc.identifier.volume | 2018-July | -
dc.identifier.spage | 691 | -
dc.identifier.epage | 697 | -
