Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1109/TMI.2020.3008871
- Scopus: eid_2-s2.0-85097004086
- PMID: 32746140
- WOS: WOS:000595547500024
Article: Self-Supervised Feature Learning via Exploiting Multi-Modal Data for Retinal Disease Diagnosis
Title | Self-Supervised Feature Learning via Exploiting Multi-Modal Data for Retinal Disease Diagnosis |
---|---|
Authors | Li, Xiaomeng; Jia, Mengyu; Islam, Md Tauhidul; Yu, Lequan; Xing, Lei |
Keywords | self-supervised learning; multi-modal data; retinal disease diagnosis |
Issue Date | 2020 |
Citation | IEEE Transactions on Medical Imaging, 2020, v. 39, n. 12, p. 4023-4033 |
Abstract | The automatic diagnosis of various retinal diseases from fundus images is important to support clinical decision-making. However, developing such automatic solutions is challenging because it requires a large amount of human-annotated data. Recently, unsupervised/self-supervised feature learning techniques have received considerable attention, as they do not require massive annotations. Most current self-supervised methods operate on a single imaging modality, and no existing method exploits multi-modal images for better results. Considering that the diagnosis of various vitreoretinal diseases can greatly benefit from another imaging modality, e.g., fundus fluorescein angiography (FFA), this paper presents a novel self-supervised feature learning method that effectively exploits multi-modal data for retinal disease diagnosis. To achieve this, we first synthesize the corresponding FFA modality and then formulate a patient feature-based softmax embedding objective. Our objective learns both modality-invariant features and patient-similarity features. Through this mechanism, the neural network captures the semantically shared information across different modalities and the apparent visual similarity between patients. We evaluate our method on two public benchmark datasets for retinal disease diagnosis. The experimental results demonstrate that our method clearly outperforms other self-supervised feature learning methods and is comparable to the supervised baseline. Our code is available at GitHub. |
Persistent Identifier | http://hdl.handle.net/10722/299481 |
ISSN | 0278-0062 (print); 1558-254X (online). 2023 Impact Factor: 8.9; 2023 SCImago Journal Rankings: 3.703 |
ISI Accession Number ID | WOS:000595547500024 |
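The abstract describes a patient feature-based softmax embedding objective that aligns each patient's fundus features with the synthesized FFA features of the same patient. A minimal NumPy sketch of the cross-modal (modality-invariant) term of such an objective is shown below, assuming L2-normalised per-patient embeddings; the paper's full objective also involves patient-similarity features, which this sketch omits. Function and variable names here are illustrative, not taken from the authors' released code.

```python
import numpy as np

def l2_normalise(x):
    """Project each row of x onto the unit sphere."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def cross_modal_softmax_embedding_loss(fund, ffa, tau=0.1):
    """InfoNCE-style softmax embedding loss across two modalities.

    fund: (N, D) L2-normalised fundus embeddings, one row per patient.
    ffa:  (N, D) L2-normalised embeddings of the synthesised FFA views.
    For each patient i, the same patient's FFA embedding is the positive;
    all other patients' FFA embeddings serve as negatives.
    """
    logits = fund @ ffa.T / tau                      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # -log p(positive | anchor)

rng = np.random.default_rng(0)
fund = l2_normalise(rng.normal(size=(8, 16)))
# Perfectly aligned modalities give a small loss; unrelated embeddings
# give a loss near log(N).
aligned_loss = cross_modal_softmax_embedding_loss(fund, fund)
random_loss = cross_modal_softmax_embedding_loss(
    fund, l2_normalise(rng.normal(size=(8, 16))))
```

Minimising this term pulls the two modality views of the same patient together in the embedding space while pushing different patients apart, which is one way to realise the "modality-invariant features" the abstract refers to.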
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Xiaomeng | - |
dc.contributor.author | Jia, Mengyu | - |
dc.contributor.author | Islam, Md Tauhidul | - |
dc.contributor.author | Yu, Lequan | - |
dc.contributor.author | Xing, Lei | - |
dc.date.accessioned | 2021-05-21T03:34:30Z | - |
dc.date.available | 2021-05-21T03:34:30Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE Transactions on Medical Imaging, 2020, v. 39, n. 12, p. 4023-4033 | - |
dc.identifier.issn | 0278-0062 | - |
dc.identifier.uri | http://hdl.handle.net/10722/299481 | - |
dc.description.abstract | The automatic diagnosis of various retinal diseases from fundus images is important to support clinical decision-making. However, developing such automatic solutions is challenging because it requires a large amount of human-annotated data. Recently, unsupervised/self-supervised feature learning techniques have received considerable attention, as they do not require massive annotations. Most current self-supervised methods operate on a single imaging modality, and no existing method exploits multi-modal images for better results. Considering that the diagnosis of various vitreoretinal diseases can greatly benefit from another imaging modality, e.g., fundus fluorescein angiography (FFA), this paper presents a novel self-supervised feature learning method that effectively exploits multi-modal data for retinal disease diagnosis. To achieve this, we first synthesize the corresponding FFA modality and then formulate a patient feature-based softmax embedding objective. Our objective learns both modality-invariant features and patient-similarity features. Through this mechanism, the neural network captures the semantically shared information across different modalities and the apparent visual similarity between patients. We evaluate our method on two public benchmark datasets for retinal disease diagnosis. The experimental results demonstrate that our method clearly outperforms other self-supervised feature learning methods and is comparable to the supervised baseline. Our code is available at GitHub. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Medical Imaging | - |
dc.subject | self-supervised learning | - |
dc.subject | multi-modal data | - |
dc.subject | Retinal disease diagnosis | - |
dc.title | Self-Supervised Feature Learning via Exploiting Multi-Modal Data for Retinal Disease Diagnosis | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TMI.2020.3008871 | - |
dc.identifier.pmid | 32746140 | - |
dc.identifier.scopus | eid_2-s2.0-85097004086 | - |
dc.identifier.volume | 39 | - |
dc.identifier.issue | 12 | - |
dc.identifier.spage | 4023 | - |
dc.identifier.epage | 4033 | - |
dc.identifier.eissn | 1558-254X | - |
dc.identifier.isi | WOS:000595547500024 | - |