
Conference Paper: Tight sample complexity of learning one-hidden-layer convolutional neural networks

Title: Tight sample complexity of learning one-hidden-layer convolutional neural networks
Authors: Cao, Yuan; Gu, Quanquan
Issue Date: 2019
Citation: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 8-14 December 2019. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2020
Abstract: We study the sample complexity of learning one-hidden-layer convolutional neural networks (CNNs) with non-overlapping filters. We propose a novel algorithm called approximate gradient descent for training CNNs, and show that, with high probability, the proposed algorithm with random initialization grants a linear convergence to the ground-truth parameters up to statistical precision. Compared with existing work, our result applies to general non-trivial, monotonic and Lipschitz continuous activation functions including ReLU, Leaky ReLU, Sigmoid and Softplus, etc. Moreover, our sample complexity beats existing results in the dependency on the number of hidden nodes and filter size. In fact, our result matches the information-theoretic lower bound for learning one-hidden-layer CNNs with linear activation functions, suggesting that our sample complexity is tight. Our theoretical analysis is backed up by numerical experiments.
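
To make the setting concrete, the sketch below (illustrative only, not the paper's approximate gradient descent procedure) sets up a one-hidden-layer CNN with non-overlapping filters, f(x) = sum_j v_j * sigma(w . x_j), and fits the shared filter w by plain gradient descent from a random initialization. The patch count k, filter size r, sample size n, learning rate, and the choice to fix the second-layer weights v are assumptions made purely for illustration.

    # Illustrative sketch: one-hidden-layer CNN with non-overlapping filters,
    # trained by plain gradient descent on the squared loss from a random start.
    # This is NOT the paper's approximate gradient descent algorithm; all
    # hyperparameters below are arbitrary choices for demonstration.
    import numpy as np

    rng = np.random.default_rng(0)
    k, r = 8, 4                  # number of non-overlapping patches, filter size
    d, n = k * r, 2000           # input dimension, number of samples

    def sigma(z):                # ReLU (the paper covers a broader activation class)
        return np.maximum(z, 0.0)

    def forward(X, w, v):
        patches = X.reshape(-1, k, r)      # split each input into k patches of size r
        return sigma(patches @ w) @ v      # shared filter w, second-layer weights v

    # Ground-truth parameters and noisy labels
    w_star = rng.normal(size=r)
    w_star /= np.linalg.norm(w_star)
    v_star = rng.normal(size=k)
    X = rng.normal(size=(n, d))
    y = forward(X, w_star, v_star) + 0.01 * rng.normal(size=n)

    # Random initialization; learn the filter w (second layer fixed for simplicity)
    w = 0.1 * rng.normal(size=r)
    v = v_star.copy()
    lr = 0.05
    for _ in range(500):
        patches = X.reshape(-1, k, r)
        pre = patches @ w                  # (n, k) pre-activations
        resid = sigma(pre) @ v - y         # (n,) residuals
        # Gradient of 0.5 * mean squared error w.r.t. w (ReLU derivative = indicator)
        grad = np.einsum('n,nk,nkr->r', resid, (pre > 0) * v, patches) / n
        w -= lr * grad

    print('filter recovery error:', np.linalg.norm(w - w_star))

The paper's actual algorithm replaces this exact gradient step with an approximate one and covers general non-trivial, monotonic, Lipschitz continuous activations; convergence of this simplified sketch to the ground-truth filter is not guaranteed.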
Persistent Identifier: http://hdl.handle.net/10722/303694
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399
ISI Accession Number ID: WOS:000535866902026

 

DC Field: Value
dc.contributor.author: Cao, Yuan
dc.contributor.author: Gu, Quanquan
dc.date.accessioned: 2021-09-15T08:25:50Z
dc.date.available: 2021-09-15T08:25:50Z
dc.date.issued: 2019
dc.identifier.citation: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 8-14 December 2019. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2020
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/303694
dc.description.abstract: We study the sample complexity of learning one-hidden-layer convolutional neural networks (CNNs) with non-overlapping filters. We propose a novel algorithm called approximate gradient descent for training CNNs, and show that, with high probability, the proposed algorithm with random initialization grants a linear convergence to the ground-truth parameters up to statistical precision. Compared with existing work, our result applies to general non-trivial, monotonic and Lipschitz continuous activation functions including ReLU, Leaky ReLU, Sigmoid and Softplus, etc. Moreover, our sample complexity beats existing results in the dependency on the number of hidden nodes and filter size. In fact, our result matches the information-theoretic lower bound for learning one-hidden-layer CNNs with linear activation functions, suggesting that our sample complexity is tight. Our theoretical analysis is backed up by numerical experiments.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
dc.title: Tight sample complexity of learning one-hidden-layer convolutional neural networks
dc.type: Conference_Paper
dc.description.nature: link_to_OA_fulltext
dc.identifier.scopus: eid_2-s2.0-85090172205
dc.identifier.isi: WOS:000535866902026
