Article: Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation
Title | Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation |
---|---|
Authors | Wang, Shujun; Yu, Lequan; Yang, Xin; Fu, Chi Wing; Heng, Pheng Ann |
Keywords | deep learning; optic cup segmentation; domain adaptation; adversarial learning; Optic disc segmentation |
Issue Date | 2019 |
Citation | IEEE Transactions on Medical Imaging, 2019, v. 38, n. 11, p. 2485-2495 |
Abstract | Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks demonstrate promising progress in the joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as a backbone. Considering the specific morphology of OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentation. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain shift challenge by encouraging the segmentation in the target domain to be similar to the source ones. Since the whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design the pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving the segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved the first place in the OD and OC segmentation tasks in the MICCAI 2018 Retinal Fundus Glaucoma Challenge. (An illustrative sketch of the patch-based output-space adversarial step follows this table.) |
Persistent Identifier | http://hdl.handle.net/10722/299595 |
ISSN | 0278-0062 (2023 Impact Factor: 8.9; 2023 SCImago Journal Rankings: 3.703) |
ISI Accession Number ID | WOS:000494433300001 |
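The abstract above describes the core of pOSAL: a segmentation backbone supervised on the source domain, plus a discriminator that inspects local patches of the output-space segmentation maps so that target-domain predictions are pushed to look source-like. The sketch below is a minimal, assumed PyTorch rendering of that output-space adversarial step, not the authors' released code: the names (`PatchDiscriminator`, `adversarial_step`, `seg_net`, `lambda_adv`) are illustrative, and the paper's lightweight backbone and morphology-aware loss are simplified here to a generic network and a soft Dice term.

```python
# Minimal sketch of patch-based output-space adversarial training,
# assuming a generic 2-class (disc/cup) segmentation backbone `seg_net`.
# Names and hyper-parameters are illustrative, not the authors' release.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: maps a 2-channel segmentation
    probability map to a grid of real/fake logits, one per local patch."""
    def __init__(self, in_ch=2, base=64):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (base, base * 2, base * 4, base * 8):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # patch logits
        self.net = nn.Sequential(*layers)

    def forward(self, prob_map):
        return self.net(prob_map)

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice term standing in for the paper's morphology-aware
    segmentation loss (a simplification for this sketch)."""
    inter = (probs * target).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def adversarial_step(seg_net, disc, opt_seg, opt_disc,
                     src_img, src_mask, tgt_img, lambda_adv=0.01):
    """One training step: supervised loss on source images, adversarial loss
    that makes target-domain segmentations look source-like to the critic."""
    # --- update segmentation network ---
    opt_seg.zero_grad()
    src_prob = torch.sigmoid(seg_net(src_img))          # (N, 2, H, W)
    tgt_prob = torch.sigmoid(seg_net(tgt_img))
    seg_loss = dice_loss(src_prob, src_mask)
    # fool the discriminator: target patches should be scored as "source"
    adv_logits = disc(tgt_prob)
    adv_loss = F.binary_cross_entropy_with_logits(
        adv_logits, torch.ones_like(adv_logits))
    (seg_loss + lambda_adv * adv_loss).backward()
    opt_seg.step()

    # --- update patch discriminator ---
    opt_disc.zero_grad()
    d_src = disc(src_prob.detach())
    d_tgt = disc(tgt_prob.detach())
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
        F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    d_loss.backward()
    opt_disc.step()
    return seg_loss.item(), adv_loss.item(), d_loss.item()
```

Because the discriminator is fully convolutional, its output is a grid of per-patch logits rather than a single whole-image score, which is what gives the adversarial signal the fine-grained, patch-based character described in the abstract.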
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Shujun | - |
dc.contributor.author | Yu, Lequan | - |
dc.contributor.author | Yang, Xin | - |
dc.contributor.author | Fu, Chi Wing | - |
dc.contributor.author | Heng, Pheng Ann | - |
dc.date.accessioned | 2021-05-21T03:34:45Z | - |
dc.date.available | 2021-05-21T03:34:45Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE Transactions on Medical Imaging, 2019, v. 38, n. 11, p. 2485-2495 | - |
dc.identifier.issn | 0278-0062 | - |
dc.identifier.uri | http://hdl.handle.net/10722/299595 | - |
dc.description.abstract | Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks demonstrate promising progress in the joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as a backbone. Considering the specific morphology of OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentation. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain shift challenge by encouraging the segmentation in the target domain to be similar to the source ones. Since the whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design the pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving the segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved the first place in the OD and OC segmentation tasks in the MICCAI 2018 Retinal Fundus Glaucoma Challenge. (A hedged sketch of one possible morphology-aware loss follows this table.) | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Medical Imaging | - |
dc.subject | deep learning | - |
dc.subject | optic cup segmentation | - |
dc.subject | domain adaptation | - |
dc.subject | adversarial learning | - |
dc.subject | Optic disc segmentation | - |
dc.title | Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TMI.2019.2899910 | - |
dc.identifier.pmid | 30794170 | - |
dc.identifier.scopus | eid_2-s2.0-85069862302 | - |
dc.identifier.volume | 38 | - |
dc.identifier.issue | 11 | - |
dc.identifier.spage | 2485 | - |
dc.identifier.epage | 2495 | - |
dc.identifier.eissn | 1558-254X | - |
dc.identifier.isi | WOS:000494433300001 | - |
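The abstract also mentions a morphology-aware segmentation loss that encourages accurate and smooth OD/OC boundaries. Its exact formulation is not given in this record, so the sketch below pairs a soft Dice term with a total-variation style smoothness penalty on the predicted probability maps as one plausible, assumed reading; `morphology_aware_loss`, `smoothness_penalty`, and `lambda_smooth` are illustrative names and parameters, not the paper's definition.

```python
# Hedged sketch of a "morphology-aware" segmentation loss: soft Dice for
# region overlap plus a smoothness penalty on the predicted probability maps.
# This is an assumed stand-in, not the paper's exact formulation.
import torch

def soft_dice(probs, target, eps=1e-6):
    # probs, target: (N, C, H, W) with C = 2 channels (disc, cup)
    inter = (probs * target).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def smoothness_penalty(probs):
    # Total-variation style penalty: discourages ragged boundaries by
    # penalizing large differences between neighbouring pixels.
    dh = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean()
    dw = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()
    return dh + dw

def morphology_aware_loss(logits, target, lambda_smooth=0.1):
    probs = torch.sigmoid(logits)
    return soft_dice(probs, target) + lambda_smooth * smoothness_penalty(probs)

# Example usage on random tensors with the expected shapes.
if __name__ == "__main__":
    logits = torch.randn(4, 2, 256, 256, requires_grad=True)
    target = (torch.rand(4, 2, 256, 256) > 0.5).float()
    loss = morphology_aware_loss(logits, target)
    loss.backward()
    print(float(loss))
```

In a full pipeline, such a supervised term would be combined with the patch-based output-space adversarial step sketched earlier in this record.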