Article: Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks

Title: Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks
Authors: Yu, Lequan; Chen, Hao; Dou, Qi; Qin, Jing; Heng, Pheng Ann
Keywords: skin lesion analysis; very deep convolutional neural networks; residual learning; automated melanoma recognition; fully convolutional neural networks
Issue Date: 2017
Citation: IEEE Transactions on Medical Imaging, 2017, v. 36, n. 4, p. 994-1004
Abstract: Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the images. To meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems that arise as a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features from segmented results instead of whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset.
Experimental results demonstrate the significant performance gains of the proposed framework, which ranked first among 25 teams in classification and second among 28 teams in segmentation. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
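The residual learning the abstract refers to is the ResNet idea of He et al.: each block learns a residual function F(x) and adds it back onto an identity shortcut, so very deep stacks can be trained without degradation. The paper's networks are convolutional; the toy block below uses small dense layers purely to illustrate the shortcut arithmetic (all weights and sizes here are hypothetical, not the authors' architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """One residual unit: y = ReLU(F(x) + x).

    F is a two-layer transform with a ReLU in between; the identity
    shortcut (+ x) lets gradients bypass F, which is what allows
    networks of 50+ layers to keep improving with depth.
    """
    f = relu(x @ w1) @ w2  # residual function F(x)
    return relu(f + x)     # identity shortcut added before the activation

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# With zero weights F(x) = 0, so the block degenerates to ReLU(x):
# a deep stack of such blocks can always fall back to (near-)identity
# mappings, which is why adding depth does not hurt training.
w1 = np.zeros((8, 8))
w2 = np.zeros((8, 8))
y = residual_block(x, w1, w2)
```

The fallback-to-identity property shown by the zero-weight case is the intuition behind why residual networks avoid the degradation problem the abstract mentions.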
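The two-stage framework first segments the lesion (FCRN) and then classifies a lesion-centered region rather than the whole image. A minimal sketch of that data flow, with a hypothetical thresholding stand-in for the FCRN segmentation stage (the real paper uses a learned network, not a threshold):

```python
import numpy as np

def segment_lesion(image):
    """Stand-in for the FCRN segmentation stage (hypothetical):
    returns a binary lesion mask by thresholding on darkness,
    since lesions are typically darker than surrounding skin."""
    return image < image.mean()

def crop_to_mask(image, mask):
    """Crop the image to the bounding box of the segmented lesion,
    so the downstream classifier sees lesion-specific features
    instead of the whole dermoscopy image."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]

# Toy grayscale "dermoscopy image": bright skin with a dark rectangular lesion.
image = np.ones((16, 16))
image[4:10, 5:12] = 0.1

mask = segment_lesion(image)        # stage 1: segmentation
patch = crop_to_mask(image, mask)   # crop guided by the mask
# stage 2 (not shown): feed `patch` to the classification ResNet
```

Cropping to the segmented region is what lets the classifier learn from the informative lesion area under limited training data, per the abstract.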
Persistent Identifier: http://hdl.handle.net/10722/299547
ISSN: 0278-0062
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 3.703
ISI Accession Number ID: WOS:000400868100012

 

DC Field: Value
dc.contributor.author: Yu, Lequan
dc.contributor.author: Chen, Hao
dc.contributor.author: Dou, Qi
dc.contributor.author: Qin, Jing
dc.contributor.author: Heng, Pheng Ann
dc.date.accessioned: 2021-05-21T03:34:38Z
dc.date.available: 2021-05-21T03:34:38Z
dc.date.issued: 2017
dc.identifier.citation: IEEE Transactions on Medical Imaging, 2017, v. 36, n. 4, p. 994-1004
dc.identifier.issn: 0278-0062
dc.identifier.uri: http://hdl.handle.net/10722/299547
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Medical Imaging
dc.subject: skin lesion analysis
dc.subject: very deep convolutional neural networks
dc.subject: residual learning
dc.subject: automated melanoma recognition
dc.subject: fully convolutional neural networks
dc.title: Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TMI.2016.2642839
dc.identifier.pmid: 28026754
dc.identifier.scopus: eid_2-s2.0-85018500457
dc.identifier.volume: 36
dc.identifier.issue: 4
dc.identifier.spage: 994
dc.identifier.epage: 1004
dc.identifier.eissn: 1558-254X
dc.identifier.isi: WOS:000400868100012
