Article: A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo–Tseng error bound property

Title: A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo–Tseng error bound property
Authors: Yue, Man Chung; Zhou, Zirui; So, Anthony Man Cho
Keywords: Convex composite minimization; Error bound; Proximal Newton method; Sequential quadratic approximation; Superlinear convergence
Issue Date: 2019
Citation: Mathematical Programming, 2019, v. 174, n. 1-2, p. 327-358
Abstract: We propose a new family of inexact sequential quadratic approximation (SQA) methods, which we call the inexact regularized proximal Newton (IRPN) method, for minimizing the sum of two closed proper convex functions, one of which is smooth and the other possibly non-smooth. Our method features strong convergence guarantees even when applied to problems with degenerate solutions, while allowing the inner minimization to be solved inexactly. Specifically, we prove that when the problem possesses the so-called Luo–Tseng error bound (EB) property, IRPN converges globally to an optimal solution, and the local convergence rate of the sequence of iterates generated by IRPN is linear, superlinear, or even quadratic, depending on the choice of parameters of the algorithm. Prior to this work, the EB property had been used extensively to establish the linear convergence of various first-order methods; to the best of our knowledge, however, this is the first work to use the Luo–Tseng EB property to establish the superlinear convergence of SQA-type methods for non-smooth convex minimization. As a consequence of our results, IRPN can solve regularized regression or classification problems in the high-dimensional setting with provable convergence guarantees. We compare IRPN with several empirically efficient algorithms on the ℓ1-regularized logistic regression problem; the experimental results show the competitiveness of our method.
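To make the abstract's terminology concrete: the problem class and the Luo–Tseng error bound can be stated in terms of the proximal residual map. The formulation below is the standard one for this setting; the exact constants and regularity assumptions used in the paper may differ in detail.

```latex
% Composite convex minimization: f smooth and convex, g closed, proper,
% convex and possibly non-smooth (e.g. g(x) = \lambda \|x\|_1).
\min_{x \in \mathbb{R}^n} \; F(x) := f(x) + g(x)

% Proximal residual map; R(x) = 0 exactly at the optimal solutions.
R(x) := \mathrm{prox}_g\bigl(x - \nabla f(x)\bigr) - x

% Luo--Tseng error bound (EB) property: for every \zeta \ge \min F there
% exist \kappa, \varepsilon > 0 such that
\mathrm{dist}(x, \mathcal{X}^*) \;\le\; \kappa \,\lVert R(x) \rVert
\quad \text{whenever } F(x) \le \zeta \text{ and } \lVert R(x) \rVert \le \varepsilon,
% where \mathcal{X}^* denotes the set of optimal solutions.
```

To illustrate how an SQA outer loop with an inexact inner solve fits together for the ℓ1-regularized logistic regression problem mentioned in the abstract, here is a minimal Python sketch. The residual-proportional Hessian regularization `mu`, the fixed inner-iteration budget, and the unit step length are illustrative assumptions, not the paper's actual parameter rules (IRPN also employs a line search and an adaptive inexactness criterion for the subproblems).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_grad_hess(A, b, x):
    # Gradient and Hessian of f(x) = sum_i log(1 + exp(-b_i * a_i^T x)),
    # with labels b_i in {-1, +1}. tanh gives a numerically stable sigmoid.
    z = b * (A @ x)
    s = 0.5 * (1.0 - np.tanh(0.5 * z))   # s_i = sigmoid(-z_i)
    grad = -A.T @ (b * s)
    w = s * (1.0 - s)                    # diagonal Hessian weights
    H = A.T @ (A * w[:, None])
    return grad, H

def irpn_sketch(A, b, lam, x0, outer_iters=20, inner_iters=50,
                c_mu=1.0, tol=1e-8):
    # Hypothetical inexact regularized proximal Newton loop; a sketch of
    # the SQA idea, not the paper's algorithm verbatim.
    x = x0.copy()
    for _ in range(outer_iters):
        grad, H = logistic_grad_hess(A, b, x)
        # Proximal residual R(x); serves as the optimality measure.
        r = soft_threshold(x - grad, lam) - x
        if np.linalg.norm(r) <= tol:
            break
        # Assumed rule: tie the regularization to the residual norm so the
        # quadratic model stays strongly convex near degenerate solutions.
        mu = c_mu * np.linalg.norm(r)
        H_mu = H + mu * np.eye(x.size)
        # Inexact inner solve: a fixed budget of proximal-gradient steps on
        # the model q(d) = grad^T d + 0.5 d^T H_mu d + lam * ||x + d||_1.
        L = np.linalg.norm(H_mu, 2)      # Lipschitz constant of the model gradient
        d = np.zeros_like(x)
        for _ in range(inner_iters):
            g_model = grad + H_mu @ d
            d = soft_threshold(x + d - g_model / L, lam / L) - x
        x = x + d                        # unit step; the paper uses a line search
    return x

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = np.sign(A @ rng.standard_normal(50) + 0.1 * rng.standard_normal(200))
x_hat = irpn_sketch(A, b, lam=0.1, x0=np.zeros(50))
```

In typical runs this sketch drives the proximal residual toward zero while keeping `x_hat` sparse, which is the qualitative behavior the abstract's convergence guarantees describe; the paper's parameter choices are what sharpen the local rate from linear to superlinear or quadratic.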
Persistent Identifier: http://hdl.handle.net/10722/313623
ISSN: 0025-5610
2021 Impact Factor: 3.060
2020 SCImago Journal Rankings: 2.358
ISI Accession Number ID: WOS:000463715600014

 

DC Field: Value
dc.contributor.author: Yue, Man Chung
dc.contributor.author: Zhou, Zirui
dc.contributor.author: So, Anthony Man Cho
dc.date.accessioned: 2022-06-23T01:18:47Z
dc.date.available: 2022-06-23T01:18:47Z
dc.date.issued: 2019
dc.identifier.citation: Mathematical Programming, 2019, v. 174, n. 1-2, p. 327-358
dc.identifier.issn: 0025-5610
dc.identifier.uri: http://hdl.handle.net/10722/313623
dc.description.abstract: We propose a new family of inexact sequential quadratic approximation (SQA) methods, which we call the inexact regularized proximal Newton (IRPN) method, for minimizing the sum of two closed proper convex functions, one of which is smooth and the other is possibly non-smooth. Our proposed method features strong convergence guarantees even when applied to problems with degenerate solutions while allowing the inner minimization to be solved inexactly. Specifically, we prove that when the problem possesses the so-called Luo–Tseng error bound (EB) property, IRPN converges globally to an optimal solution, and the local convergence rate of the sequence of iterates generated by IRPN is linear, superlinear, or even quadratic, depending on the choice of parameters of the algorithm. Prior to this work, such EB property has been extensively used to establish the linear convergence of various first-order methods. However, to the best of our knowledge, this work is the first to use the Luo–Tseng EB property to establish the superlinear convergence of SQA-type methods for non-smooth convex minimization. As a consequence of our result, IRPN is capable of solving regularized regression or classification problems under the high-dimensional setting with provable convergence guarantees. We compare our proposed IRPN with several empirically efficient algorithms by applying them to the ℓ1-regularized logistic regression problem. Experiment results show the competitiveness of our proposed method.
dc.language: eng
dc.relation.ispartof: Mathematical Programming
dc.subject: Convex composite minimization
dc.subject: Error bound
dc.subject: Proximal Newton method
dc.subject: Sequential quadratic approximation
dc.subject: Superlinear convergence
dc.title: A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo–Tseng error bound property
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/s10107-018-1280-6
dc.identifier.scopus: eid_2-s2.0-85063953410
dc.identifier.volume: 174
dc.identifier.issue: 1-2
dc.identifier.spage: 327
dc.identifier.epage: 358
dc.identifier.eissn: 1436-4646
dc.identifier.isi: WOS:000463715600014
