
Conference Paper: Deep Self-Learning from Noisy Labels

Title: Deep Self-Learning from Noisy Labels
Authors: Han, J; Luo, P; Wang, X
Issue Date: 2019
Publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000149
Citation: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October - 2 November 2019, pp. 5137-5146
Abstract: ConvNets achieve good results when trained on clean data, but learning from noisy labels significantly degrades performance and remains challenging. Unlike previous works that are constrained by many conditions, making them infeasible for real-world noisy cases, this work presents a novel deep self-learning framework to train a robust network on real noisy datasets without extra supervision. The proposed approach has several appealing benefits. (1) Unlike most existing work, it does not rely on any assumption about the distribution of the noisy labels, making it robust to real-world noise. (2) It does not need extra clean supervision or an auxiliary network to aid training. (3) A self-learning framework is proposed to train the network in an iterative end-to-end manner, which is both effective and efficient. Extensive experiments on challenging benchmarks such as Clothing1M and Food-101N show that our approach outperforms its counterparts in all empirical settings.
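The "iterative end-to-end" self-learning the abstract describes can be illustrated with a highly simplified, hypothetical sketch (not the authors' implementation): labels are repeatedly corrected by assigning each sample to the nearest class prototype estimated from the currently assigned labels. The toy data and all names below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian clusters (true classes 0 and 1).
n = 200
X = np.vstack([rng.normal(-2.0, 0.5, size=(n, 2)),
               rng.normal(+2.0, 0.5, size=(n, 2))])
y_true = np.array([0] * n + [1] * n)

# Simulate real-world label noise: flip roughly 30% of the labels.
y_noisy = y_true.copy()
flip = rng.random(2 * n) < 0.3
y_noisy[flip] = 1 - y_noisy[flip]

# Iterative self-correction: estimate a prototype (mean feature vector)
# per class from the current labels, then reassign every sample to the
# class of its nearest prototype, and repeat.
labels = y_noisy.copy()
for _ in range(5):
    prototypes = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

print("noise rate before:", (y_noisy != y_true).mean())
print("noise rate after: ", (labels != y_true).mean())
```

Because the majority of labels in each class are still correct, the prototypes land near the true cluster centers, so the nearest-prototype reassignment drives the noise rate down; the paper's actual method operates on deep features and trains the network jointly, which this sketch does not attempt to reproduce.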
Persistent Identifier: http://hdl.handle.net/10722/284153
ISSN: 1550-5499

DC Field: Value
dc.contributor.author: Han, J
dc.contributor.author: Luo, P
dc.contributor.author: Wang, X
dc.date.accessioned: 2020-07-20T05:56:30Z
dc.date.available: 2020-07-20T05:56:30Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October - 2 November 2019, pp. 5137-5146
dc.identifier.issn: 1550-5499
dc.identifier.uri: http://hdl.handle.net/10722/284153
dc.description.abstract: ConvNets achieve good results when trained on clean data, but learning from noisy labels significantly degrades performance and remains challenging. Unlike previous works that are constrained by many conditions, making them infeasible for real-world noisy cases, this work presents a novel deep self-learning framework to train a robust network on real noisy datasets without extra supervision. The proposed approach has several appealing benefits. (1) Unlike most existing work, it does not rely on any assumption about the distribution of the noisy labels, making it robust to real-world noise. (2) It does not need extra clean supervision or an auxiliary network to aid training. (3) A self-learning framework is proposed to train the network in an iterative end-to-end manner, which is both effective and efficient. Extensive experiments on challenging benchmarks such as Clothing1M and Food-101N show that our approach outperforms its counterparts in all empirical settings.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000149
dc.relation.ispartof: IEEE International Conference on Computer Vision (ICCV) Proceedings
dc.rights: IEEE International Conference on Computer Vision (ICCV) Proceedings. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Deep Self-Learning from Noisy Labels
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.description.nature: postprint
dc.identifier.doi: 10.1109/ICCV.2019.00524
dc.identifier.scopus: eid_2-s2.0-85081915165
dc.identifier.hkuros: 311012
dc.identifier.spage: 5137
dc.identifier.epage: 5146
dc.publisher.place: United States
