Article: Hybrid Graph Convolutional Network With Online Masked Autoencoder for Robust Multimodal Cancer Survival Prediction

Title: Hybrid Graph Convolutional Network With Online Masked Autoencoder for Robust Multimodal Cancer Survival Prediction
Authors: Hou, WT; Lin, CX; Yu, LQ; Qin, J; Yu, RS; Wang, LS
Keywords: decision fusion; graph convolutional network; hypergraph convolutional network; masked autoencoder; multi-modal learning; survival prediction
Issue Date: 6-Mar-2023
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Medical Imaging, 2023, v. 42, n. 8, p. 2462-2473
Abstract: Cancer survival prediction requires exploiting related multimodal information (e.g., pathological, clinical, and genomic features), and it is even more challenging in clinical practice due to the incompleteness of patients' multimodal data. Furthermore, existing methods lack sufficient intra- and inter-modal interactions and suffer from significant performance degradation caused by missing modalities. This paper proposes a novel hybrid graph convolutional network, entitled HGCN, equipped with an online masked autoencoder paradigm for robust multimodal cancer survival prediction. In particular, we pioneer modeling a patient's multimodal data as flexible and interpretable multimodal graphs with modality-specific preprocessing. HGCN integrates the advantages of graph convolutional networks (GCNs) and a hypergraph convolutional network (HCN) through node message passing and a hyperedge mixing mechanism to facilitate intra-modal and inter-modal interactions among multimodal graphs. With HGCN, multimodal data yield markedly more reliable predictions of a patient's survival risk than prior methods. Most importantly, to compensate for missing patient modalities in clinical scenarios, we incorporate an online masked autoencoder paradigm into HGCN, which effectively captures the intrinsic dependence between modalities and seamlessly generates missing hyperedges for model inference. Extensive experiments and analysis on six cancer cohorts from TCGA show that our method significantly outperforms state-of-the-art methods in both complete-modality and missing-modality settings. Our code is available at https://github.com/lin-lcx/HGCN.
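
To make the hybrid design described in the abstract concrete, the sketch below shows the core idea of combining per-modality graph convolutions (intra-modal message passing) with a shared hypergraph convolution over hyperedges drawn from all modalities (inter-modal mixing). This is a minimal illustration under assumptions, not the authors' implementation: the class names, shapes, and the unnormalized hypergraph update are placeholders, and the official code at https://github.com/lin-lcx/HGCN should be consulted for the actual model.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Intra-modal message passing: X' = ReLU(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # a_hat: (N, N) normalized adjacency with self-loops
        return torch.relu(a_hat @ self.lin(x))

class HypergraphConvLayer(nn.Module):
    """Inter-modal mixing: X' = ReLU(H H^T X W); degree normalization omitted for brevity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, h):
        # h: (N, E) node-to-hyperedge incidence matrix
        return torch.relu(h @ (h.t() @ self.lin(x)))

class HybridBlock(nn.Module):
    """One hybrid step: a GCN pass on each modality graph, then a hypergraph
    pass whose hyperedges span all modalities ('hyperedge mixing')."""
    def __init__(self, dim):
        super().__init__()
        self.gcn = GCNLayer(dim, dim)
        self.hcn = HypergraphConvLayer(dim, dim)

    def forward(self, xs, a_hats, h):
        intra = [self.gcn(x, a) for x, a in zip(xs, a_hats)]  # per-modality GCN
        x_all = torch.cat(intra, dim=0)  # stack nodes from all modality graphs
        return self.hcn(x_all, h)        # fuse across modalities via shared hyperedges

# Toy usage: two modality graphs (4 and 3 nodes) sharing 5 hyperedges.
x1, x2 = torch.randn(4, 16), torch.randn(3, 16)
a1, a2 = torch.eye(4), torch.eye(3)  # identity stands in for a normalized adjacency
h = torch.rand(7, 5).round()         # random 0/1 incidence of 7 nodes in 5 hyperedges
print(HybridBlock(16)([x1, x2], [a1, a2], h).shape)  # torch.Size([7, 16])
```

In the paper's setting, the incidence structure mixes hyperedges contributed by all modality graphs; when a modality is missing at inference, its hyperedges would be generated by the online masked autoencoder (trained by randomly masking hyperedges) rather than taken from real data.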
Persistent Identifier: http://hdl.handle.net/10722/331451
ISSN: 0278-0062
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 3.703
ISI Accession Number ID: WOS:001042097000027

DC Field: Value
dc.contributor.author: Hou, WT
dc.contributor.author: Lin, CX
dc.contributor.author: Yu, LQ
dc.contributor.author: Qin, J
dc.contributor.author: Yu, RS
dc.contributor.author: Wang, LS
dc.date.accessioned: 2023-09-21T06:55:51Z
dc.date.available: 2023-09-21T06:55:51Z
dc.date.issued: 2023-03-06
dc.identifier.citation: IEEE Transactions on Medical Imaging, 2023, v. 42, n. 8, p. 2462-2473
dc.identifier.issn: 0278-0062
dc.identifier.uri: http://hdl.handle.net/10722/331451
dc.description.abstract: Cancer survival prediction requires exploiting related multimodal information (e.g., pathological, clinical, and genomic features), and it is even more challenging in clinical practice due to the incompleteness of patients' multimodal data. Furthermore, existing methods lack sufficient intra- and inter-modal interactions and suffer from significant performance degradation caused by missing modalities. This paper proposes a novel hybrid graph convolutional network, entitled HGCN, equipped with an online masked autoencoder paradigm for robust multimodal cancer survival prediction. In particular, we pioneer modeling a patient's multimodal data as flexible and interpretable multimodal graphs with modality-specific preprocessing. HGCN integrates the advantages of graph convolutional networks (GCNs) and a hypergraph convolutional network (HCN) through node message passing and a hyperedge mixing mechanism to facilitate intra-modal and inter-modal interactions among multimodal graphs. With HGCN, multimodal data yield markedly more reliable predictions of a patient's survival risk than prior methods. Most importantly, to compensate for missing patient modalities in clinical scenarios, we incorporate an online masked autoencoder paradigm into HGCN, which effectively captures the intrinsic dependence between modalities and seamlessly generates missing hyperedges for model inference. Extensive experiments and analysis on six cancer cohorts from TCGA show that our method significantly outperforms state-of-the-art methods in both complete-modality and missing-modality settings. Our code is available at https://github.com/lin-lcx/HGCN.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Medical Imaging
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: decision fusion
dc.subject: graph convolutional network
dc.subject: hypergraph convolutional network
dc.subject: masked autoencoder
dc.subject: multi-modal learning
dc.subject: Survival prediction
dc.title: Hybrid Graph Convolutional Network With Online Masked Autoencoder for Robust Multimodal Cancer Survival Prediction
dc.type: Article
dc.identifier.doi: 10.1109/TMI.2023.3253760
dc.identifier.pmid: 37028064
dc.identifier.scopus: eid_2-s2.0-85149881396
dc.identifier.volume: 42
dc.identifier.issue: 8
dc.identifier.spage: 2462
dc.identifier.epage: 2473
dc.identifier.eissn: 1558-254X
dc.identifier.isi: WOS:001042097000027
dc.publisher.place: PISCATAWAY
dc.identifier.issnl: 0278-0062
