Conference Paper: Local to global learning: Gradually adding classes for training deep neural networks

Title: Local to global learning: Gradually adding classes for training deep neural networks
Authors: Cheng, Hao; Lian, Dongze; Deng, Bowen; Gao, Shenghua; Tan, Tao; Geng, Yanlin
Keywords: Computer Vision Theory; Deep Learning; Representation Learning; Statistical Learning
Issue Date: 2019
Citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, v. 2019-June, p. 4743-4751
Abstract: We propose a new learning paradigm, Local to Global Learning (LGL), for Deep Neural Networks (DNNs) to improve performance on classification problems. The core of LGL is to train a DNN model gradually from fewer categories (local) to more categories (global) within the entire training set. LGL is most closely related to the Self-Paced Learning (SPL) algorithm, but its formulation differs: SPL orders training data from simple to complex, while LGL proceeds from local to global. In this paper, we incorporate the idea of LGL into the learning objective of DNNs and explain from an information-theoretic perspective why LGL works better. Experiments on toy data, CIFAR-10, CIFAR-100, and ImageNet show that LGL outperforms the baseline and SPL-based algorithms.
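The schedule described in the abstract can be made concrete with a small sketch. The following PyTorch-style snippet illustrates the local-to-global idea on CIFAR-10: train on a growing subset of classes, reusing the model weights between stages. The stage sizes, class ordering, per-stage epoch count, and the loader_for_classes helper are illustrative assumptions, not the authors' released implementation.

    # Illustrative sketch of a local-to-global (LGL) schedule on CIFAR-10.
    # Stage sizes, class order, and epoch counts below are assumptions for
    # illustration, not the paper's exact training recipe.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, Subset
    from torchvision import datasets, transforms
    from torchvision.models import resnet18

    def loader_for_classes(dataset, classes, batch_size=128):
        # Hypothetical helper: keep only samples whose label is in `classes`.
        # Relies on torchvision datasets exposing a `targets` list.
        keep = [i for i, y in enumerate(dataset.targets) if y in classes]
        return DataLoader(Subset(dataset, keep), batch_size=batch_size, shuffle=True)

    train_set = datasets.CIFAR10(root="data", train=True, download=True,
                                 transform=transforms.ToTensor())
    model = resnet18(num_classes=10)  # classifier head covers all 10 global classes
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    # Local to global: widen the label set stage by stage, keeping the weights.
    for num_classes in (2, 4, 6, 8, 10):
        loader = loader_for_classes(train_set, set(range(num_classes)))
        for epoch in range(5):  # per-stage epoch count is an assumption
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()

Reusing a single model with a full 10-way head across stages is one simple reading of "gradually adding classes"; the paper's formulation builds the idea into the learning objective itself.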
Persistent Identifier: http://hdl.handle.net/10722/345108
ISSN: 1063-6919
2023 SCImago Journal Rankings: 10.331

DC Field: Value
dc.contributor.author: Cheng, Hao
dc.contributor.author: Lian, Dongze
dc.contributor.author: Deng, Bowen
dc.contributor.author: Gao, Shenghua
dc.contributor.author: Tan, Tao
dc.contributor.author: Geng, Yanlin
dc.date.accessioned: 2024-08-15T09:25:18Z
dc.date.available: 2024-08-15T09:25:18Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, v. 2019-June, p. 4743-4751
dc.identifier.issn: 1063-6919
dc.identifier.uri: http://hdl.handle.net/10722/345108
dc.description.abstract: We propose a new learning paradigm, Local to Global Learning (LGL), for Deep Neural Networks (DNNs) to improve performance on classification problems. The core of LGL is to train a DNN model gradually from fewer categories (local) to more categories (global) within the entire training set. LGL is most closely related to the Self-Paced Learning (SPL) algorithm, but its formulation differs: SPL orders training data from simple to complex, while LGL proceeds from local to global. In this paper, we incorporate the idea of LGL into the learning objective of DNNs and explain from an information-theoretic perspective why LGL works better. Experiments on toy data, CIFAR-10, CIFAR-100, and ImageNet show that LGL outperforms the baseline and SPL-based algorithms.
dc.language: eng
dc.relation.ispartof: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
dc.subject: Computer Vision Theory
dc.subject: Deep Learning
dc.subject: Representation Learning
dc.subject: Statistical Learning
dc.title: Local to global learning: Gradually adding classes for training deep neural networks
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/CVPR.2019.00488
dc.identifier.scopus: eid_2-s2.0-85078806120
dc.identifier.volume: 2019-June
dc.identifier.spage: 4743
dc.identifier.epage: 4751
