Article: Minibatch and local SGD: Algorithmic stability and linear speedup in generalization
| Title | Minibatch and local SGD: Algorithmic stability and linear speedup in generalization |
|---|---|
| Authors | Lei, Yunwen; Sun, Tao; Liu, Mingrui |
| Keywords | Algorithmic stability; Generalization analysis; Learning theory; Stochastic gradient descent |
| Issue Date | 17-Jul-2025 |
| Publisher | Elsevier |
| Citation | Applied and Computational Harmonic Analysis, 2025, v. 79 |
| Abstract | The increasing scale of data has made parallelism a popular way to speed up optimization. Minibatch stochastic gradient descent (minibatch SGD) and local SGD are two popular methods for parallel optimization. Existing theoretical studies show a linear speedup of these methods with respect to the number of machines; however, this speedup is measured by optimization errors in a multi-pass setting. By comparison, the stability and generalization of these methods are much less studied. In this paper, we study the stability and generalization of minibatch and local SGD to understand their learnability, introducing an expectation-variance decomposition. We incorporate training errors into the stability analysis, which shows how small training errors help generalization for overparameterized models. We show that minibatch and local SGD achieve a linear speedup in attaining optimal risk bounds. (An illustrative sketch of the two methods appears after this table.) |
| Persistent Identifier | http://hdl.handle.net/10722/366464 |
| ISSN | 1063-5203 |
| Journal Metrics | 2023 Impact Factor: 2.6; 2023 SCImago Journal Rankings: 2.231 |
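
The two parallel methods named in the abstract differ mainly in when they communicate: minibatch SGD averages the stochastic gradients from all machines at every step, while local SGD lets each machine run several SGD steps on its own copy of the iterate and only periodically averages the iterates. The sketch below is a minimal illustration of this difference on a synthetic least-squares problem; it is not the paper's code, the machines are simulated sequentially, and the number of machines `K`, batch size `b`, learning rate, and step counts are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): minibatch SGD vs. local SGD
# on a synthetic least-squares problem, with K "machines" simulated in a loop.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ w_star + noise.
n, d = 1024, 20
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)


def grad(w, idx):
    """Average least-squares gradient over the samples indexed by idx."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)


def minibatch_sgd(steps=200, K=8, b=4, lr=0.05):
    """Each step: K machines each draw b samples, their gradients are averaged,
    and a single shared iterate is updated (effective batch size K * b)."""
    w = np.zeros(d)
    for _ in range(steps):
        g = np.mean(
            [grad(w, rng.integers(0, n, size=b)) for _ in range(K)], axis=0
        )
        w -= lr * g
    return w


def local_sgd(rounds=20, local_steps=10, K=8, b=4, lr=0.05):
    """Each round: every machine runs `local_steps` SGD updates on its own copy
    of the iterate, then the copies are averaged (the communication step)."""
    w = np.zeros(d)
    for _ in range(rounds):
        local_iterates = []
        for _ in range(K):
            w_k = w.copy()
            for _ in range(local_steps):
                w_k -= lr * grad(w_k, rng.integers(0, n, size=b))
            local_iterates.append(w_k)
        w = np.mean(local_iterates, axis=0)
    return w


if __name__ == "__main__":
    for name, w in [("minibatch SGD", minibatch_sgd()), ("local SGD", local_sgd())]:
        print(f"{name}: ||w - w_star|| = {np.linalg.norm(w - w_star):.4f}")
```

In this sketch both routines draw the same number of samples per machine per update; local SGD simply averages the iterates less frequently, trading communication for local computation.
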
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lei, Yunwen | - |
| dc.contributor.author | Sun, Tao | - |
| dc.contributor.author | Liu, Mingrui | - |
| dc.date.accessioned | 2025-11-25T04:19:33Z | - |
| dc.date.available | 2025-11-25T04:19:33Z | - |
| dc.date.issued | 2025-07-17 | - |
| dc.identifier.citation | Applied and Computational Harmonic Analysis, 2025, v. 79 | - |
| dc.identifier.issn | 1063-5203 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/366464 | - |
| dc.description.abstract | The increasing scale of data has made parallelism a popular way to speed up optimization. Minibatch stochastic gradient descent (minibatch SGD) and local SGD are two popular methods for parallel optimization. Existing theoretical studies show a linear speedup of these methods with respect to the number of machines; however, this speedup is measured by optimization errors in a multi-pass setting. By comparison, the stability and generalization of these methods are much less studied. In this paper, we study the stability and generalization of minibatch and local SGD to understand their learnability, introducing an expectation-variance decomposition. We incorporate training errors into the stability analysis, which shows how small training errors help generalization for overparameterized models. We show that minibatch and local SGD achieve a linear speedup in attaining optimal risk bounds. | - |
| dc.language | eng | - |
| dc.publisher | Elsevier | - |
| dc.relation.ispartof | Applied and Computational Harmonic Analysis | - |
| dc.subject | Algorithmic stability | - |
| dc.subject | Generalization analysis | - |
| dc.subject | Learning theory | - |
| dc.subject | Stochastic gradient descent | - |
| dc.title | Minibatch and local SGD: Algorithmic stability and linear speedup in generalization | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1016/j.acha.2025.101795 | - |
| dc.identifier.scopus | eid_2-s2.0-105010699042 | - |
| dc.identifier.volume | 79 | - |
| dc.identifier.eissn | 1096-603X | - |
| dc.identifier.issnl | 1063-5203 | - |
