Article: Optimization and Learning With Randomly Compressed Gradient Updates

Title: Optimization and Learning With Randomly Compressed Gradient Updates
Authors: Huang, Zhanliang; Lei, Yunwen; Kabán, Ata
Issue Date: 2023
Citation: Neural Computation, 2023, v. 35, n. 7, p. 1234-1287
Abstract: Gradient descent methods are simple and efficient optimization algorithms with widespread applications. To handle high-dimensional problems, we study compressed stochastic gradient descent (SGD) with low-dimensional gradient updates. We provide a detailed analysis in terms of both optimization rates and generalization rates. To this end, we develop uniform stability bounds for CompSGD for both smooth and nonsmooth problems, based on which we develop almost optimal population risk bounds. Then we extend our analysis to two variants of SGD: batch and mini-batch gradient descent. Furthermore, we show that these variants achieve almost optimal rates compared to their high-dimensional gradient setting. Thus, our results provide a way to reduce the dimension of gradient updates without affecting the convergence rate in the generalization analysis. Moreover, we show that the same result also holds in the differentially private setting, which allows us to reduce the dimension of added noise with “almost free” cost. (An illustrative sketch of the compressed-update idea follows this record.)
Persistent Identifier: http://hdl.handle.net/10722/329976
ISSN: 0899-7667
2023 Impact Factor: 2.7
2023 SCImago Journal Rankings: 0.948
ISI Accession Number ID: WOS:001125378300004
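The abstract describes SGD in which each gradient update is compressed to a low-dimensional subspace before the parameter step. The sketch below is only a rough illustration of that idea under the assumption of a fresh Gaussian random projection per step; the function and variable names are invented for the example and this is not the paper's exact algorithm.

import numpy as np

def compressed_sgd(grad_fn, w0, lr=0.01, k=5, n_steps=1000, seed=0):
    """Toy sketch of SGD with randomly compressed gradient updates.

    Each d-dimensional stochastic gradient is projected onto a random
    k-dimensional subspace and lifted back to d dimensions before the
    update. The Gaussian projection and all names here are assumptions
    for illustration, not the construction used in the paper.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    d = w.shape[0]
    for _ in range(n_steps):
        g = grad_fn(w, rng)                            # stochastic gradient, shape (d,)
        A = rng.standard_normal((k, d)) / np.sqrt(k)   # fresh random projection matrix
        w -= lr * (A.T @ (A @ g))                      # step with the compressed gradient
    return w

# Toy usage: least-squares regression on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = X @ rng.standard_normal(50) + 0.1 * rng.standard_normal(200)

def grad(w, r):
    i = r.integers(len(y))                             # pick one training example
    return 2.0 * X[i] * (X[i] @ w - y[i])              # gradient of its squared loss

w_hat = compressed_sgd(grad, np.zeros(50), lr=0.01, k=5, n_steps=5000)

Drawing a fresh projection at every step keeps the compressed update unbiased in expectation, since E[A^T A] = I under this scaling; the paper's analysis concerns how such compression affects optimization and generalization rates.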

 

DC Field                      Value
dc.contributor.author         Huang, Zhanliang
dc.contributor.author         Lei, Yunwen
dc.contributor.author         Kabán, Ata
dc.date.accessioned           2023-08-09T03:36:55Z
dc.date.available             2023-08-09T03:36:55Z
dc.date.issued                2023
dc.identifier.citation        Neural Computation, 2023, v. 35, n. 7, p. 1234-1287
dc.identifier.issn            0899-7667
dc.identifier.uri             http://hdl.handle.net/10722/329976
dc.description.abstract       Gradient descent methods are simple and efficient optimization algorithms with widespread applications. To handle high-dimensional problems, we study compressed stochastic gradient descent (SGD) with low-dimensional gradient updates. We provide a detailed analysis in terms of both optimization rates and generalization rates. To this end, we develop uniform stability bounds for CompSGD for both smooth and nonsmooth problems, based on which we develop almost optimal population risk bounds. Then we extend our analysis to two variants of SGD: batch and mini-batch gradient descent. Furthermore, we show that these variants achieve almost optimal rates compared to their high-dimensional gradient setting. Thus, our results provide a way to reduce the dimension of gradient updates without affecting the convergence rate in the generalization analysis. Moreover, we show that the same result also holds in the differentially private setting, which allows us to reduce the dimension of added noise with “almost free” cost.
dc.language                   eng
dc.relation.ispartof          Neural Computation
dc.title                      Optimization and Learning With Randomly Compressed Gradient Updates
dc.type                       Article
dc.description.nature         link_to_subscribed_fulltext
dc.identifier.doi             10.1162/neco_a_01588
dc.identifier.pmid            37187168
dc.identifier.scopus          eid_2-s2.0-85161145576
dc.identifier.volume          35
dc.identifier.issue           7
dc.identifier.spage           1234
dc.identifier.epage           1287
dc.identifier.eissn           1530-888X
dc.identifier.isi             WOS:001125378300004
