Article: On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm

Title: On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
Authors: Tao, Min; Yuan, Xiaoming
Keywords: Linear convergence rate; Convex programming; Augmented Lagrangian method; Proximal point algorithm; Alternating direction method of multipliers
Issue Date: 2017
Citation: Journal of Scientific Computing, 2017, p. 1-25
Abstract: © 2017 Springer Science+Business Media, LLC. The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, linear convergence results for this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar's condition to see how to ensure the linear convergence rate of some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts.
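The relaxed proximal point iteration the abstract refers to can be sketched numerically. The toy problem below (a strongly convex quadratic, with a relaxation factor gamma in (0, 2)) is our own illustrative example, not an instance taken from the paper; it only demonstrates the kind of linear (geometric) error contraction the paper analyzes.

```python
import numpy as np

# Illustrative sketch, assuming a strongly convex quadratic
#   f(x) = 0.5 x^T A x - b^T x,
# whose proximal map prox_{lam f}(v) solves (I + lam*A) x = v + lam*b.
# The "generalized" (relaxed) PPA step is
#   x+ = (1 - gamma) x + gamma * prox_{lam f}(x),   gamma in (0, 2).

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # symmetric positive definite -> strongly convex f
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)   # unique minimizer of f

lam, gamma = 1.0, 1.5            # proximal parameter and relaxation factor (assumed values)
x = np.zeros(n)
errors = []
for _ in range(30):
    p = np.linalg.solve(np.eye(n) + lam * A, x + lam * b)  # proximal (resolvent) step
    x = (1.0 - gamma) * x + gamma * p                      # relaxed update
    errors.append(np.linalg.norm(x - x_star))

# Linear convergence: the distance to the solution shrinks by at least a
# fixed factor per iteration on this strongly convex instance.
ratios = [errors[k + 1] / errors[k] for k in range(10)]
print(max(ratios))
```

With gamma = 1 this reduces to the classical PPA; values of gamma near 2 often accelerate convergence in practice, which is one motivation for studying the relaxed scheme.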
Persistent Identifier: http://hdl.handle.net/10722/251225
ISSN: 0885-7474
2023 Impact Factor: 2.8
2023 SCImago Journal Rankings: 1.248
ISI Accession Number ID: WOS:000424676600011


DC Field / Value
dc.contributor.author: Tao, Min
dc.contributor.author: Yuan, Xiaoming
dc.date.accessioned: 2018-02-01T01:54:57Z
dc.date.available: 2018-02-01T01:54:57Z
dc.date.issued: 2017
dc.identifier.citation: Journal of Scientific Computing, 2017, p. 1-25
dc.identifier.issn: 0885-7474
dc.identifier.uri: http://hdl.handle.net/10722/251225
dc.description.abstract: © 2017 Springer Science+Business Media, LLC. The proximal point algorithm (PPA) has been well studied in the literature. In particular, its linear convergence rate was studied by Rockafellar in 1976 under a certain condition. We consider a generalized PPA in the generic setting of finding a zero point of a maximal monotone operator, and show that the condition proposed by Rockafellar also suffices to ensure the linear convergence rate of this generalized PPA. Indeed, we show that these linear convergence rates are optimal. Both the exact and inexact versions of this generalized PPA are discussed. The motivation for considering this generalized PPA is that it includes as special cases the relaxed versions of some splitting methods that originate from the PPA. Thus, linear convergence results for this generalized PPA can be used to better understand the convergence of some widely used algorithms in the literature. We focus on the particular convex minimization context and specify Rockafellar's condition to see how to ensure the linear convergence rate of some efficient numerical schemes, including the classical augmented Lagrangian method proposed by Hestenes and Powell in 1969 and its relaxed version, the original alternating direction method of multipliers (ADMM) by Glowinski and Marrocco in 1975 and its relaxed version (i.e., the generalized ADMM by Eckstein and Bertsekas in 1992). Some refined conditions weaker than existing ones are proposed in these particular contexts.
dc.language: eng
dc.relation.ispartof: Journal of Scientific Computing
dc.subject: Linear convergence rate
dc.subject: Convex programming
dc.subject: Augmented Lagrangian method
dc.subject: Proximal point algorithm
dc.subject: Alternating direction method of multipliers
dc.title: On the Optimal Linear Convergence Rate of a Generalized Proximal Point Algorithm
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/s10915-017-0477-9
dc.identifier.scopus: eid_2-s2.0-85023203205
dc.identifier.spage: 1
dc.identifier.epage: 25
dc.identifier.isi: WOS:000424676600011
dc.identifier.issnl: 0885-7474
