Article: Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging
Title | Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging |
---|---|
Authors | Yan, Rui; Qu, Liangqiong; Wei, Qingyue; Huang, Shih Cheng; Shen, Liyue; Rubin, Daniel; Xing, Lei; Zhou, Yuyin |
Keywords | Biomedical imaging; Data Efficiency; Data models; Distributed databases; Federated Learning; Self-supervised learning; Task analysis; Training; Transformers; Vision Transformers |
Issue Date | 2022 |
Citation | IEEE Transactions on Medical Imaging, 2022 |
Abstract | The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL. |
Persistent Identifier | http://hdl.handle.net/10722/325597 |
ISSN | 0278-0062 (2023 Impact Factor: 8.9; 2023 SCImago Journal Rankings: 3.703) |
ISI Accession Number ID | WOS:001022138900003 |
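To make the approach summarized in the abstract more concrete, below is a minimal, hypothetical sketch of federated self-supervised pre-training with masked image modeling: each client reconstructs randomly masked patches of its own unlabeled images with a small Transformer encoder, and a server averages the client weights FedAvg-style between rounds. The model sizes, masking ratio, optimizer settings, and all helper names are illustrative assumptions, not the authors' implementation; the official code is at https://github.com/rui-yan/SSL-FL.

```python
# Hypothetical sketch of federated masked-image-modeling pre-training.
# Sizes, masking ratio, and hyperparameters are illustrative assumptions,
# not the authors' implementation (see the SSL-FL repository for that).
import copy
import torch
import torch.nn as nn


class TinyMaskedAutoencoder(nn.Module):
    """Minimal ViT-like encoder with a linear decoder that reconstructs masked patches."""

    def __init__(self, img_size=32, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.patch_dim = 3 * patch * patch
        self.embed = nn.Linear(self.patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decoder = nn.Linear(dim, self.patch_dim)

    def patchify(self, x):
        B, C, H, W = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)            # B, C, H/p, W/p, p, p
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        return x                                          # B, num_patches, patch_dim

    def forward(self, x, mask_ratio=0.75):
        patches = self.patchify(x)                        # reconstruction targets
        tokens = self.embed(patches) + self.pos
        mask = torch.rand(patches.shape[:2], device=x.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        recon = self.decoder(self.encoder(tokens))
        loss = ((recon - patches) ** 2).mean(dim=-1)      # per-patch MSE
        return (loss * mask).sum() / mask.sum().clamp(min=1)  # loss on masked patches only


def local_pretrain(model, loader, epochs=1, lr=1e-4, device="cpu"):
    """One client's local masked-image-modeling round (labels, if present, are ignored)."""
    model = copy.deepcopy(model).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, _ in loader:                          # assumes (image, label) batches
            opt.zero_grad()
            loss = model(images.to(device))
            loss.backward()
            opt.step()
    return model.state_dict()


def fedavg(state_dicts, weights):
    """Server-side weighted averaging of client parameters (FedAvg-style)."""
    total = float(sum(weights))
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key].float() * (w / total)
                       for sd, w in zip(state_dicts, weights))
    return avg


def federated_pretrain(client_loaders, rounds=10):
    """Self-supervised pre-training across clients without sharing any images."""
    global_model = TinyMaskedAutoencoder()
    sizes = [len(loader.dataset) for loader in client_loaders]
    for _ in range(rounds):
        local_states = [local_pretrain(global_model, loader) for loader in client_loaders]
        global_model.load_state_dict(fedavg(local_states, sizes))
    return global_model                                   # encoder is fine-tuned downstream
```

After federated pre-training, the encoder would be fine-tuned on the (limited) labeled downstream task data, which is where the label efficiency described in the abstract comes in.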
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yan, Rui | - |
dc.contributor.author | Qu, Liangqiong | - |
dc.contributor.author | Wei, Qingyue | - |
dc.contributor.author | Huang, Shih Cheng | - |
dc.contributor.author | Shen, Liyue | - |
dc.contributor.author | Rubin, Daniel | - |
dc.contributor.author | Xing, Lei | - |
dc.contributor.author | Zhou, Yuyin | - |
dc.date.accessioned | 2023-02-27T07:34:39Z | - |
dc.date.available | 2023-02-27T07:34:39Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | IEEE Transactions on Medical Imaging, 2022 | - |
dc.identifier.issn | 0278-0062 | - |
dc.identifier.uri | http://hdl.handle.net/10722/325597 | - |
dc.description.abstract | The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Medical Imaging | - |
dc.subject | Biomedical imaging | - |
dc.subject | Data Efficiency | - |
dc.subject | Data models | - |
dc.subject | Distributed databases | - |
dc.subject | Federated Learning | - |
dc.subject | Self-supervised Learning | - |
dc.subject | Task analysis | - |
dc.subject | Training | - |
dc.subject | Transformers | - |
dc.subject | Vision Transformers | - |
dc.title | Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TMI.2022.3233574 | - |
dc.identifier.scopus | eid_2-s2.0-85147205950 | - |
dc.identifier.eissn | 1558-254X | - |
dc.identifier.isi | WOS:001022138900003 | - |