Article: Generalization Guarantees of Gradient Descent for Shallow Neural Networks

Title: Generalization Guarantees of Gradient Descent for Shallow Neural Networks
Authors: Wang, Puyu; Lei, Yunwen; Wang, Di; Ying, Yiming; Zhou, Ding Xuan
Issue Date: 21-Jan-2025
Publisher: Massachusetts Institute of Technology Press
Citation: Neural Computation, 2025, v. 37, n. 2, p. 344-402
Abstract

Significant progress has been made recently in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-hidden-layer NNs and has not addressed the impact of different network scaling, where network scaling corresponds to the normalization of the layers. In this article, we greatly extend the previous work (Lei et al., 2022; Richards & Kuzborskij, 2021) by conducting a comprehensive stability and generalization analysis of GD for two-layer and three-layer NNs. For two-layer NNs, our results are established under general network scaling, relaxing previous conditions. In the case of three-layer NNs, our technical contribution lies in demonstrating its nearly co-coercive property by utilizing a novel induction strategy that thoroughly explores the effects of overparameterization. As a direct application of our general findings, we derive the excess risk rate of O(1/√n) for GD in both two-layer and three-layer NNs. This sheds light on sufficient or necessary conditions for underparameterized and overparameterized NNs trained by GD to attain the desired risk rate of O(1/√n). Moreover, we demonstrate that as the scaling factor increases or the network complexity decreases, less overparameterization is required for GD to achieve the desired error rates. Additionally, under a low-noise condition, we obtain a fast risk rate of O(1/n) for GD in both two-layer and three-layer NNs.
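To make the quantities in the abstract concrete, the following is a minimal LaTeX sketch of the standard setting it refers to; the notation (network width m, scaling exponent c, step size eta, empirical risk L_S over n samples, population risk L) is illustrative and not taken from the paper.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Two-layer network of width m under a general scaling exponent c (hypothetical notation):
\[ f_W(x) = \frac{1}{m^{c}} \sum_{k=1}^{m} a_k \, \sigma\bigl(\langle w_k, x \rangle\bigr) \]
% Gradient descent on the empirical risk $L_S$ built from $n$ training examples:
\[ W_{t+1} = W_t - \eta \, \nabla L_S(W_t) \]
% The excess risk of the GD output $W_T$ is bounded, in expectation, by a
% stability-controlled generalization gap plus an optimization error:
\[ \mathbb{E}\bigl[L(W_T)\bigr] - L(W^{*})
   \le \underbrace{\mathbb{E}\bigl[L(W_T) - L_S(W_T)\bigr]}_{\text{generalization (stability)}}
     + \underbrace{\mathbb{E}\bigl[L_S(W_T) - L_S(W^{*})\bigr]}_{\text{optimization}} \]
\end{document}

Under this decomposition, the O(1/√n) rate and the low-noise O(1/n) rate quoted in the abstract bound the left-hand side of the last display.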


Persistent Identifier: http://hdl.handle.net/10722/355113
ISSN: 0899-7667
2023 Impact Factor: 2.7
2023 SCImago Journal Rankings: 0.948
ISI Accession Number ID: WOS:001406038400004

 

DC Field: Value
dc.contributor.author: Wang, Puyu
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Wang, Di
dc.contributor.author: Ying, Yiming
dc.contributor.author: Zhou, Ding Xuan
dc.date.accessioned: 2025-03-27T00:35:31Z
dc.date.available: 2025-03-27T00:35:31Z
dc.date.issued: 2025-01-21
dc.identifier.citation: Neural Computation, 2025, v. 37, n. 2, p. 344-402
dc.identifier.issn: 0899-7667
dc.identifier.uri: http://hdl.handle.net/10722/355113
dc.description.abstract: Significant progress has been made recently in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-hidden-layer NNs and has not addressed the impact of different network scaling, where network scaling corresponds to the normalization of the layers. In this article, we greatly extend the previous work (Lei et al., 2022; Richards & Kuzborskij, 2021) by conducting a comprehensive stability and generalization analysis of GD for two-layer and three-layer NNs. For two-layer NNs, our results are established under general network scaling, relaxing previous conditions. In the case of three-layer NNs, our technical contribution lies in demonstrating its nearly co-coercive property by utilizing a novel induction strategy that thoroughly explores the effects of overparameterization. As a direct application of our general findings, we derive the excess risk rate of O(1/√n) for GD in both two-layer and three-layer NNs. This sheds light on sufficient or necessary conditions for underparameterized and overparameterized NNs trained by GD to attain the desired risk rate of O(1/√n). Moreover, we demonstrate that as the scaling factor increases or the network complexity decreases, less overparameterization is required for GD to achieve the desired error rates. Additionally, under a low-noise condition, we obtain a fast risk rate of O(1/n) for GD in both two-layer and three-layer NNs.
dc.language: eng
dc.publisher: Massachusetts Institute of Technology Press
dc.relation.ispartof: Neural Computation
dc.title: Generalization Guarantees of Gradient Descent for Shallow Neural Networks
dc.type: Article
dc.identifier.doi: 10.1162/neco_a_01725
dc.identifier.pmid: 39556516
dc.identifier.scopus: eid_2-s2.0-85216908507
dc.identifier.volume: 37
dc.identifier.issue: 2
dc.identifier.spage: 344
dc.identifier.epage: 402
dc.identifier.eissn: 1530-888X
dc.identifier.isi: WOS:001406038400004
dc.identifier.issnl: 0899-7667
