Article: Multi-organ Segmentation from Partially Labeled and Unaligned Multi-modal MRI in Thyroid-associated Orbitopathy

Title: Multi-organ Segmentation from Partially Labeled and Unaligned Multi-modal MRI in Thyroid-associated Orbitopathy
Authors: Chen, Cheng; Deng, Min; Zhong, Yuan; Cai, Jinyue; Chan, Karen Kar Wun; Dou, Qi; Chong, Kelvin Kam Lung; Heng, Pheng Ann; Chu, Winnie Chiu Wing
Keywords: Multi-modal segmentation; partial labels; thyroid-associated orbitopathy
Issue Date: 25-Feb-2025
Publisher: IEEE
Citation: IEEE Journal of Biomedical and Health Informatics, 2025, v. 29, n. 6, p. 4161-4172
Abstract: Thyroid-associated orbitopathy (TAO) is a prevalent inflammatory autoimmune disorder, leading to orbital disfigurement and visual disability. Automatic comprehensive segmentation tailored for quantitative multi-modal MRI assessment of TAO holds enormous promise but is still lacking. In this paper, we propose a novel method, named cross-modal attentive self-training (CMAST), for the multi-organ segmentation in TAO using partially labeled and unaligned multi-modal MRI data. Our method first introduces a dedicatedly designed cross-modal pseudo label self-training scheme, which leverages self-training to refine the initial pseudo labels generated by cross-modal registration, so as to complete the label sets for comprehensive segmentation. With the obtained pseudo labels, we further devise a learnable attentive fusion module to aggregate multi-modal knowledge based on learned cross-modal feature attention, which relaxes the requirement of pixel-wise alignment across modalities. A prototypical contrastive learning loss is further incorporated to facilitate cross-modal feature alignment. We evaluate our method on a large clinical TAO cohort with 100 cases of multi-modal orbital MRI. The experimental results demonstrate the promising performance of our method in achieving comprehensive segmentation of TAO-affected organs on both T1 and T1c modalities, outperforming previous methods by a large margin. Code will be released upon acceptance.
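The abstract mentions a prototypical contrastive learning loss that pulls features from one modality toward class prototypes computed from the other. Since the paper's code is not yet released, the following is only a minimal illustrative NumPy sketch of that general idea (an InfoNCE-style loss over class prototypes); the function names, the exact loss formulation, and the temperature value are assumptions, not the authors' implementation:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Scale rows to unit length so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def class_prototypes(features, labels, num_classes):
    # One prototype per class: the mean feature of that class, normalized.
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    return l2_normalize(protos)

def prototypical_contrastive_loss(feat_a, labels_a, protos_b, temperature=0.1):
    """InfoNCE-style loss pulling modality-A features toward the
    same-class prototypes computed from modality B (hypothetical form)."""
    feat_a = l2_normalize(feat_a)
    logits = feat_a @ protos_b.T / temperature           # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the true class prototype, averaged over pixels.
    return -log_prob[np.arange(len(labels_a)), labels_a].mean()
```

Features that already cluster around the correct cross-modal prototype yield a small loss, while randomly scattered features yield a large one, which is the alignment pressure the abstract describes.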
Persistent Identifier: http://hdl.handle.net/10722/360808
ISSN: 2168-2194
2023 Impact Factor: 6.7
2023 SCImago Journal Rankings: 1.964

 

DC Field | Value | Language
dc.contributor.author | Chen, Cheng | -
dc.contributor.author | Deng, Min | -
dc.contributor.author | Zhong, Yuan | -
dc.contributor.author | Cai, Jinyue | -
dc.contributor.author | Chan, Karen Kar Wun | -
dc.contributor.author | Dou, Qi | -
dc.contributor.author | Chong, Kelvin Kam Lung | -
dc.contributor.author | Heng, Pheng Ann | -
dc.contributor.author | Chu, Winnie Chiu Wing | -
dc.date.accessioned | 2025-09-16T00:30:38Z | -
dc.date.available | 2025-09-16T00:30:38Z | -
dc.date.issued | 2025-02-25 | -
dc.identifier.citation | IEEE Journal of Biomedical and Health Informatics, 2025, v. 29, n. 6, p. 4161-4172 | -
dc.identifier.issn | 2168-2194 | -
dc.identifier.uri | http://hdl.handle.net/10722/360808 | -
dc.description.abstract | Thyroid-associated orbitopathy (TAO) is a prevalent inflammatory autoimmune disorder, leading to orbital disfigurement and visual disability. Automatic comprehensive segmentation tailored for quantitative multi-modal MRI assessment of TAO holds enormous promise but is still lacking. In this paper, we propose a novel method, named cross-modal attentive self-training (CMAST), for the multi-organ segmentation in TAO using partially labeled and unaligned multi-modal MRI data. Our method first introduces a dedicatedly designed cross-modal pseudo label self-training scheme, which leverages self-training to refine the initial pseudo labels generated by cross-modal registration, so as to complete the label sets for comprehensive segmentation. With the obtained pseudo labels, we further devise a learnable attentive fusion module to aggregate multi-modal knowledge based on learned cross-modal feature attention, which relaxes the requirement of pixel-wise alignment across modalities. A prototypical contrastive learning loss is further incorporated to facilitate cross-modal feature alignment. We evaluate our method on a large clinical TAO cohort with 100 cases of multi-modal orbital MRI. The experimental results demonstrate the promising performance of our method in achieving comprehensive segmentation of TAO-affected organs on both T1 and T1c modalities, outperforming previous methods by a large margin. Code will be released upon acceptance. | -
dc.language | eng | -
dc.publisher | IEEE | -
dc.relation.ispartof | IEEE Journal of Biomedical and Health Informatics | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Multi-modal segmentation | -
dc.subject | partial labels | -
dc.subject | thyroid-associated orbitopathy | -
dc.title | Multi-organ Segmentation from Partially Labeled and Unaligned Multi-modal MRI in Thyroid-associated Orbitopathy | -
dc.type | Article | -
dc.identifier.doi | 10.1109/JBHI.2025.3545138 | -
dc.identifier.scopus | eid_2-s2.0-85218931825 | -
dc.identifier.volume | 29 | -
dc.identifier.issue | 6 | -
dc.identifier.spage | 4161 | -
dc.identifier.epage | 4172 | -
dc.identifier.eissn | 2168-2208 | -
dc.identifier.issnl | 2168-2194 | -
