Conference Paper: Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion

Title: Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion
Authors: Chen, Cheng; Dou, Qi; Jin, Yueming; Chen, Hao; Qin, Jing; Heng, Pheng Ann
Issue Date: 2019
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, v. 11766 LNCS, p. 447-456
Abstract: Accurate medical image segmentation commonly requires effective learning of the complementary information from multimodal data. In clinical practice, however, we often encounter the problem of missing imaging modalities. We tackle this challenge and propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities. Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code, which is unique to each modality, and a modality-invariant content code, which absorbs multimodal information for the segmentation task. With enhanced modality invariance, the disentangled content codes from each modality are fused into a shared representation that gains robustness to missing data. The fusion is achieved via a learning-based strategy that gates the contribution of different modalities at different locations. We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset. With performance competitive with state-of-the-art approaches under full modality, our method achieves outstanding robustness in various missing-modality situations, significantly exceeding the state-of-the-art method in average Dice for whole tumor segmentation.
Persistent Identifier: http://hdl.handle.net/10722/349375
ISSN: 0302-9743 (print), 1611-3349 (electronic)
2023 SCImago Journal Rankings: 0.606
ISI Accession Number ID: WOS:000548733600050
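The gated fusion described in the abstract can be sketched concretely: each modality contributes a disentangled content feature map, a learned per-location gate weights its contribution, and missing modalities are simply excluded from the weighted sum. Below is a minimal, hypothetical PyTorch illustration of that idea; the class name GatedFusion, the 1x1-conv gating layers, and the normalization step are assumptions made for illustration, not the authors' implementation.

    # Hypothetical sketch of learned gated fusion over per-modality content
    # codes, robust to missing modalities. Not the paper's actual code.
    import torch
    import torch.nn as nn

    class GatedFusion(nn.Module):
        """Fuse per-modality content features with learned spatial gates."""

        def __init__(self, channels: int, num_modalities: int):
            super().__init__()
            # One 1x1 conv per modality maps its content code to a gate map.
            self.gates = nn.ModuleList(
                [nn.Conv2d(channels, channels, kernel_size=1)
                 for _ in range(num_modalities)]
            )

        def forward(self, content_codes, available):
            # content_codes: list of (B, C, H, W) tensors, one per modality.
            # available: boolean list; False marks a missing modality.
            # Assumes at least one modality is present.
            gated, weights = [], []
            for code, gate, ok in zip(content_codes, self.gates, available):
                if not ok:
                    continue  # drop missing modalities from the fusion
                w = torch.sigmoid(gate(code))  # per-location weight in [0, 1]
                gated.append(w * code)
                weights.append(w)
            # Normalize so the fused map stays comparable across subsets
            # of available modalities.
            weight_sum = torch.stack(weights).sum(dim=0).clamp_min(1e-6)
            return torch.stack(gated).sum(dim=0) / weight_sum

    # Usage: four MRI modalities (e.g. T1, T1c, T2, FLAIR), one missing.
    fusion = GatedFusion(channels=32, num_modalities=4)
    codes = [torch.randn(1, 32, 64, 64) for _ in range(4)]
    fused = fusion(codes, available=[True, True, False, True])
    print(fused.shape)  # torch.Size([1, 32, 64, 64])

Because the gates are normalized over whichever modalities are present, the fused representation keeps a consistent scale whether all four inputs or only a subset are available, which is the property that makes such a fusion robust to missing data.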


DC Field: Value
dc.contributor.author: Chen, Cheng
dc.contributor.author: Dou, Qi
dc.contributor.author: Jin, Yueming
dc.contributor.author: Chen, Hao
dc.contributor.author: Qin, Jing
dc.contributor.author: Heng, Pheng Ann
dc.date.accessioned: 2024-10-17T06:58:07Z
dc.date.available: 2024-10-17T06:58:07Z
dc.date.issued: 2019
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019, v. 11766 LNCS, p. 447-456
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/349375
dc.description.abstract: Accurate medical image segmentation commonly requires effective learning of the complementary information from multimodal data. In clinical practice, however, we often encounter the problem of missing imaging modalities. We tackle this challenge and propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities. Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code, which is unique to each modality, and a modality-invariant content code, which absorbs multimodal information for the segmentation task. With enhanced modality invariance, the disentangled content codes from each modality are fused into a shared representation that gains robustness to missing data. The fusion is achieved via a learning-based strategy that gates the contribution of different modalities at different locations. We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset. With performance competitive with state-of-the-art approaches under full modality, our method achieves outstanding robustness in various missing-modality situations, significantly exceeding the state-of-the-art method in average Dice for whole tumor segmentation.
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.title: Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-030-32248-9_50
dc.identifier.scopus: eid_2-s2.0-85075692071
dc.identifier.volume: 11766 LNCS
dc.identifier.spage: 447
dc.identifier.epage: 456
dc.identifier.eissn: 1611-3349
dc.identifier.isi: WOS:000548733600050
