File Download
There are no files associated with this item.
Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/JBHI.2025.3539712
- Scopus: eid_2-s2.0-85217526643
Citations:
- Scopus: 0
Article: Completed Feature Disentanglement Learning for Multimodal MRIs Analysis
Title | Completed Feature Disentanglement Learning for Multimodal MRIs Analysis |
---|---|
Authors | Liu, Tianling; Liu, Hongying; Shang, Fanhua; Yu, Lequan; Han, Tong; Wan, Liang |
Keywords | Dynamic fusion; Feature disentanglement; MRIs; Multimodal learning |
Issue Date | 6-Feb-2025 |
Publisher | IEEE |
Citation | IEEE Journal of Biomedical and Health Informatics, 2025 |
Abstract | Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, which aim to learn superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods separate multimodal data into modality-shared and modality-specific features, and employ concatenation or attention mechanisms to integrate these features. However, our preliminary experiments indicate that these methods can lose information shared among subsets of modalities when the inputs contain more than two modalities, and such information is critical for prediction accuracy. Furthermore, these methods do not adequately capture the relationships among the decoupled features at the fusion stage. To address these limitations, we propose a novel Complete Feature Disentanglement (CFD) strategy that recovers the information lost during feature decoupling. Specifically, the CFD strategy not only identifies modality-shared and modality-specific features, but also decouples the features shared among subsets of the multimodal inputs, termed modality-partial-shared features. We further introduce a new Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically integrates these decoupled features by explicitly learning the local-global relationships among them. The effectiveness of our approach is validated through classification tasks on three multimodal MRI datasets. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art MML methods by clear margins. |
Persistent Identifier | http://hdl.handle.net/10722/355176 |
ISSN | 2168-2194 (2023 Impact Factor: 6.7; 2023 SCImago Journal Rankings: 1.964) |
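The decomposition described in the abstract can be made concrete with a toy sketch: for M input modalities, the CFD strategy (as described, not as implemented in the paper) associates one feature with every non-empty subset of modalities — singletons give modality-specific features, the full set gives modality-shared features, and intermediate subsets give the modality-partial-shared features. The modality names and the softmax gate below are illustrative assumptions, a simplification of the Mixture-of-Experts fusion, not the authors' code:

```python
from itertools import combinations
import math

def feature_groups(modalities):
    """Enumerate every non-empty subset of modalities.

    Singletons correspond to modality-specific features, the full set to
    modality-shared features, and everything in between to the
    modality-partial-shared features described in the abstract.
    """
    groups = []
    for r in range(1, len(modalities) + 1):
        groups.extend(combinations(modalities, r))
    return groups

def softmax(scores):
    """Numerically stable softmax, standing in for a learned gate."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gated_fusion(features, gate_scores):
    """Toy dynamic fusion: a convex combination of per-group feature
    vectors, weighted by softmax-normalized gate scores."""
    weights = softmax(gate_scores)
    fused = [0.0] * len(features[0])
    for w, feat in zip(weights, features):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return fused

# Hypothetical MRI sequences as modalities:
mods = ["T1", "T2", "FLAIR"]
groups = feature_groups(mods)
# 3 specific + 3 partial-shared + 1 fully shared = 2**3 - 1 = 7 groups
print(len(groups))  # → 7
```

For three modalities, the two-element subsets are exactly the shared information that a plain shared/specific split discards — the gap the CFD strategy is designed to close.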
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Tianling | - |
dc.contributor.author | Liu, Hongying | - |
dc.contributor.author | Shang, Fanhua | - |
dc.contributor.author | Yu, Lequan | - |
dc.contributor.author | Han, Tong | - |
dc.contributor.author | Wan, Liang | - |
dc.date.accessioned | 2025-03-28T00:35:38Z | - |
dc.date.available | 2025-03-28T00:35:38Z | - |
dc.date.issued | 2025-02-06 | - |
dc.identifier.citation | IEEE Journal of Biomedical and Health Informatics, 2025 | - |
dc.identifier.issn | 2168-2194 | - |
dc.identifier.uri | http://hdl.handle.net/10722/355176 | - |
dc.description.abstract | <p>Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, which aim to learn superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods separate multimodal data into modality-shared and modality-specific features, and employ concatenation or attention mechanisms to integrate these features. However, our preliminary experiments indicate that these methods can lose information shared among subsets of modalities when the inputs contain more than two modalities, and such information is critical for prediction accuracy. Furthermore, these methods do not adequately capture the relationships among the decoupled features at the fusion stage. To address these limitations, we propose a novel Complete Feature Disentanglement (CFD) strategy that recovers the information lost during feature decoupling. Specifically, the CFD strategy not only identifies modality-shared and modality-specific features, but also decouples the features shared among subsets of the multimodal inputs, termed modality-partial-shared features. We further introduce a new Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically integrates these decoupled features by explicitly learning the local-global relationships among them. The effectiveness of our approach is validated through classification tasks on three multimodal MRI datasets. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art MML methods by clear margins.</p> | - |
dc.language | eng | - |
dc.publisher | IEEE | - |
dc.relation.ispartof | IEEE Journal of Biomedical and Health Informatics | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | Dynamic fusion | - |
dc.subject | Feature disentanglement | - |
dc.subject | MRIs | - |
dc.subject | Multimodal learning | - |
dc.title | Completed Feature Disentanglement Learning for Multimodal MRIs Analysis | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/JBHI.2025.3539712 | - |
dc.identifier.scopus | eid_2-s2.0-85217526643 | - |
dc.identifier.eissn | 2168-2208 | - |
dc.identifier.issnl | 2168-2194 | - |