Conference Paper: Stochastic composite mirror descent: Optimal bounds with high probabilities
Title | Stochastic composite mirror descent: Optimal bounds with high probabilities |
---|---|
Authors | Lei, Yunwen; Tang, Ke |
Issue Date | 2018 |
Citation | Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 1519-1529 |
Abstract | We study stochastic composite mirror descent, a class of scalable algorithms able to exploit the geometry and composite structure of a problem. We consider both convex and strongly convex objectives with non-smooth loss functions, for each of which we establish high-probability convergence rates optimal up to a logarithmic factor. We apply the derived computational error bounds to study the generalization performance of multi-pass stochastic gradient descent (SGD) in a non-parametric setting. Our high-probability generalization bounds enjoy a logarithmic dependency on the number of passes provided that the step size sequence is square-summable, which improves the existing bounds in expectation with a polynomial dependency and therefore gives a strong justification for the ability of multi-pass SGD to overcome overfitting. Our analysis removes boundedness assumptions on subgradients often imposed in the literature. Numerical results are reported to support our theoretical findings. |
Persistent Identifier | http://hdl.handle.net/10722/329561 |
ISSN | 1049-5258 (2020 SCImago Journal Rankings: 1.399) |
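For intuition, the following is a minimal sketch (not from the paper) of the composite update the abstract describes, specialized to the Euclidean mirror map, in which case stochastic composite mirror descent reduces to proximal SGD. The ℓ1-regularized squared loss, the step-size schedule, and all function names below are illustrative assumptions; the paper's setting also covers non-smooth losses and general Bregman distances.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def scmd_l1(X, y, lam=0.05, n_passes=5, eta0=0.5, theta=0.75, seed=0):
    """Sketch of stochastic composite mirror descent for
        min_w  E[(x^T w - y)^2] / 2 + lam * ||w||_1
    with the Euclidean mirror map: each iteration takes a stochastic
    (sub)gradient step on the loss, then applies the prox of the
    regularizer. Choosing theta > 1/2 makes eta_t = eta0 * t^(-theta)
    square-summable, matching the condition in the abstract."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(n_passes):              # multi-pass SGD over the sample
        for i in rng.permutation(n):
            t += 1
            eta = eta0 * t ** (-theta)
            # Squared loss gradient for simplicity; the paper also
            # covers non-smooth losses via subgradients.
            grad = (X[i] @ w - y[i]) * X[i]
            # Composite step: gradient step on the loss, then the
            # prox of the l1 regularizer.
            w = soft_threshold(w - eta * grad, eta * lam)
    return w

# Toy usage on synthetic sparse data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)
print(np.round(scmd_l1(X, y)[:5], 2))
```

With a non-Euclidean mirror map, the same template would replace the squared Euclidean distance implicit in the gradient/prox step with the corresponding Bregman distance, which is how the algorithm exploits the geometry of the problem.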
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lei, Yunwen | - |
dc.contributor.author | Tang, Ke | - |
dc.date.accessioned | 2023-08-09T03:33:41Z | - |
dc.date.available | 2023-08-09T03:33:41Z | - |
dc.date.issued | 2018 | - |
dc.identifier.citation | Advances in Neural Information Processing Systems, 2018, v. 2018-December, p. 1519-1529 | - |
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | http://hdl.handle.net/10722/329561 | - |
dc.description.abstract | We study stochastic composite mirror descent, a class of scalable algorithms able to exploit the geometry and composite structure of a problem. We consider both convex and strongly convex objectives with non-smooth loss functions, for each of which we establish high-probability convergence rates optimal up to a logarithmic factor. We apply the derived computational error bounds to study the generalization performance of multi-pass stochastic gradient descent (SGD) in a non-parametric setting. Our high-probability generalization bounds enjoy a logarithmic dependency on the number of passes provided that the step size sequence is square-summable, which improves the existing bounds in expectation with a polynomial dependency and therefore gives a strong justification for the ability of multi-pass SGD to overcome overfitting. Our analysis removes boundedness assumptions on subgradients often imposed in the literature. Numerical results are reported to support our theoretical findings. | -
dc.language | eng | - |
dc.relation.ispartof | Advances in Neural Information Processing Systems | - |
dc.title | Stochastic composite mirror descent: Optimal bounds with high probabilities | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85064828146 | - |
dc.identifier.volume | 2018-December | - |
dc.identifier.spage | 1519 | - |
dc.identifier.epage | 1529 | - |