Conference Paper: The Benefits of Implicit Regularization from SGD in Least Squares Problems
Field | Value |
---|---|
Title | The Benefits of Implicit Regularization from SGD in Least Squares Problems |
Authors | Zou, Difan; Wu, Jingfeng; Braverman, Vladimir; Gu, Quanquan; Foster, Dean P.; Kakade, Sham M. |
Issue Date | 2021 |
Citation | Advances in Neural Information Processing Systems, 2021, v. 7, p. 5456-5468 |
Abstract | Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice, which has been hypothesized to play an important role in the generalization of modern machine learning approaches. In this work, we seek to understand these issues in the simpler setting of linear regression (including both underparameterized and overparameterized regimes), where our goal is to make sharp instance-based comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression. For a broad class of least squares problem instances (that are natural in high-dimensional settings), we show: (1) for every problem instance and for every ridge parameter, (unregularized) SGD, when provided with logarithmically more samples than that provided to the ridge algorithm, generalizes no worse than the ridge solution (provided SGD uses a tuned constant stepsize); (2) conversely, there exist instances (in this wide problem class) where optimally-tuned ridge regression requires quadratically more samples than SGD in order to have the same generalization performance. Taken together, our results show that, up to the logarithmic factors, the generalization performance of SGD is always no worse than that of ridge regression in a wide range of overparameterized problems, and, in fact, could be much better for some problem instances. More generally, our results show how algorithmic regularization has important consequences even in simpler (overparameterized) convex settings. |
Persistent Identifier | http://hdl.handle.net/10722/316659 |
ISSN | 1049-5258 |
2020 SCImago Journal Rankings | 1.399 |
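The abstract above compares the generalization of (unregularized) iterate-averaged SGD with a tuned constant stepsize against optimally tuned ridge regression on least squares instances. The snippet below is a minimal illustrative sketch of that comparison on a synthetic Gaussian instance; it is not the paper's experiment, and the dimension, covariance spectrum, stepsize, sample sizes, and ridge grid are all hypothetical choices made for illustration.

```python
# Illustrative sketch only: iterate-averaged constant-stepsize SGD vs. ridge
# regression on a synthetic Gaussian least squares instance. All problem
# parameters below are arbitrary assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
d, n_sgd, n_ridge, noise = 200, 2000, 2000, 0.5

# Ground-truth linear model with a power-law covariance spectrum, a common
# stand-in for the "natural high-dimensional" instances the abstract mentions.
eigs = 1.0 / np.arange(1, d + 1) ** 2
w_star = rng.normal(size=d) / np.sqrt(d)

def sample(n):
    # Features with diagonal covariance diag(eigs); noisy linear responses.
    X = rng.normal(size=(n, d)) * np.sqrt(eigs)
    y = X @ w_star + noise * rng.normal(size=n)
    return X, y

def excess_risk(w):
    # Population excess risk E[(x^T (w - w*))^2] under the diagonal covariance.
    return float(np.sum(eigs * (w - w_star) ** 2))

def averaged_sgd(X, y, stepsize):
    # Single pass of SGD with a constant stepsize; return the iterate average.
    w = np.zeros(d)
    w_sum = np.zeros(d)
    for x_i, y_i in zip(X, y):
        w -= stepsize * (x_i @ w - y_i) * x_i
        w_sum += w
    return w_sum / len(y)

def ridge(X, y, lam):
    # Closed-form ridge solution (X^T X + lam I)^{-1} X^T y.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X_s, y_s = sample(n_sgd)
X_r, y_r = sample(n_ridge)

risk_sgd = excess_risk(averaged_sgd(X_s, y_s, stepsize=0.1))
risk_ridge = min(excess_risk(ridge(X_r, y_r, lam)) for lam in [0.01, 0.1, 1.0, 10.0])

print(f"averaged SGD excess risk : {risk_sgd:.4f}")
print(f"tuned ridge excess risk  : {risk_ridge:.4f}")
```

The ridge parameter here is tuned by a small grid search for simplicity; the paper's results concern the comparison against the optimally tuned ridge estimator, so a coarse grid is only a rough proxy.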
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zou, Difan | - |
dc.contributor.author | Wu, Jingfeng | - |
dc.contributor.author | Braverman, Vladimir | - |
dc.contributor.author | Gu, Quanquan | - |
dc.contributor.author | Foster, Dean P. | - |
dc.contributor.author | Kakade, Sham M. | - |
dc.date.accessioned | 2022-09-14T11:41:00Z | - |
dc.date.available | 2022-09-14T11:41:00Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Advances in Neural Information Processing Systems, 2021, v. 7, p. 5456-5468 | - |
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | http://hdl.handle.net/10722/316659 | - |
dc.description.abstract | Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice, which has been hypothesized to play an important role in the generalization of modern machine learning approaches. In this work, we seek to understand these issues in the simpler setting of linear regression (including both underparameterized and overparameterized regimes), where our goal is to make sharp instance-based comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression. For a broad class of least squares problem instances (that are natural in high-dimensional settings), we show: (1) for every problem instance and for every ridge parameter, (unregularized) SGD, when provided with logarithmically more samples than that provided to the ridge algorithm, generalizes no worse than the ridge solution (provided SGD uses a tuned constant stepsize); (2) conversely, there exist instances (in this wide problem class) where optimally-tuned ridge regression requires quadratically more samples than SGD in order to have the same generalization performance. Taken together, our results show that, up to the logarithmic factors, the generalization performance of SGD is always no worse than that of ridge regression in a wide range of overparameterized problems, and, in fact, could be much better for some problem instances. More generally, our results show how algorithmic regularization has important consequences even in simpler (overparameterized) convex settings. | - |
dc.language | eng | - |
dc.relation.ispartof | Advances in Neural Information Processing Systems | - |
dc.title | The Benefits of Implicit Regularization from SGD in Least Squares Problems | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_OA_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85131726722 | - |
dc.identifier.volume | 7 | - |
dc.identifier.spage | 5456 | - |
dc.identifier.epage | 5468 | - |