Article: An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Field | Value
---|---
Title | An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Authors | Li, F; Qu, Z
Keywords | Inexact augmented Lagrangian method; Large scale optimization; Randomized first-order method; Explicit inner termination rule; Relative smoothness condition
Issue Date | 2021
Publisher | Springer. The Journal's web site is located at https://www.springer.com/journal/12532
Citation | Mathematical Programming Computation, 2021, v. 13 n. 3, p. 583-644
Abstract | We propose an inexact proximal augmented Lagrangian framework with an explicit inner problem termination rule for composite convex optimization problems. We consider arbitrary linearly convergent inner solvers, including in particular stochastic algorithms, making the resulting framework more scalable in the face of ever-increasing problem dimensions. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring an a priori target accuracy to be set. When the primal and dual domains are bounded, our method achieves O(1/√ϵ) and O(1/ϵ) complexity bounds, in terms of the number of inner solver iterations, for the strongly convex and non-strongly convex cases respectively. Without the boundedness assumption, only logarithmic terms need to be added, and the above two complexity bounds increase respectively to Õ(1/√ϵ) and Õ(1/ϵ), which hold both for obtaining ϵ-optimal and ϵ-KKT solutions. Within the general framework that we propose, we also obtain Õ(1/ϵ) and Õ(1/ϵ²) complexity bounds under a relative smoothness assumption on the differentiable component of the objective function. We show, through theoretical analysis as well as numerical experiments, the computational speedup possibly achieved by the use of randomized inner solvers for large-scale problems.
Persistent Identifier | http://hdl.handle.net/10722/307877
ISSN | 1867-2949 (2023 Impact Factor: 4.3; 2023 SCImago Journal Rankings: 2.501)
ISI Accession Number ID | WOS:000673691400001
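The abstract above describes an outer augmented Lagrangian loop whose subproblems are solved inexactly by a linearly convergent inner solver with an adaptive termination rule. The Python snippet below is a minimal illustrative sketch of that generic structure for a linearly constrained problem min f(x) s.t. Ax = b; it uses plain gradient descent as the inner solver and a geometrically shrinking inner tolerance as a placeholder stopping rule, not the explicit self-adaptive criterion developed in the paper, and all function names and parameter values are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: a generic inexact proximal augmented Lagrangian
# loop for min_x f(x) s.t. Ax = b, with each subproblem solved approximately
# by a linearly convergent inner solver (plain gradient descent here).
# The stopping rule below (a geometrically shrinking inner tolerance) is a
# placeholder, NOT the explicit self-adaptive criterion of the paper.
import numpy as np

def inexact_proximal_alm(f_grad, A, b, x0, beta=1.0, rho=1.0,
                         outer_iters=50, inner_step=1e-2, shrink=0.5):
    x, y = x0.copy(), np.zeros(A.shape[0])
    tol = 1.0                                  # initial inner tolerance (assumed)
    for _ in range(outer_iters):
        x_prev = x.copy()
        # Inner solver: gradient descent on the proximal augmented Lagrangian
        #   L(x) = f(x) + y^T(Ax - b) + (beta/2)||Ax - b||^2 + (1/(2 rho))||x - x_prev||^2
        for _ in range(10_000):
            grad = (f_grad(x) + A.T @ (y + beta * (A @ x - b))
                    + (x - x_prev) / rho)
            if np.linalg.norm(grad) <= tol:    # inexact termination of the subproblem
                break
            x -= inner_step * grad
        y += beta * (A @ x - b)                # dual (multiplier) update
        tol *= shrink                          # tighten the inner accuracy over time
    return x, y

# Toy usage: f(x) = 0.5||x||^2 under the constraint x1 + x2 = 1.
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, y = inexact_proximal_alm(lambda x: x, A, b, x0=np.zeros(2))
print(x)   # should approach [0.5, 0.5]
```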
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, F | - |
dc.contributor.author | Qu, Z | - |
dc.date.accessioned | 2021-11-12T13:39:12Z | - |
dc.date.available | 2021-11-12T13:39:12Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Mathematical Programming Computation, 2021, v. 13 n. 3, p. 583-644 | - |
dc.identifier.issn | 1867-2949 | - |
dc.identifier.uri | http://hdl.handle.net/10722/307877 | - |
dc.description.abstract | We propose an inexact proximal augmented Lagrangian framework with an explicit inner problem termination rule for composite convex optimization problems. We consider arbitrary linearly convergent inner solvers, including in particular stochastic algorithms, making the resulting framework more scalable in the face of ever-increasing problem dimensions. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring an a priori target accuracy to be set. When the primal and dual domains are bounded, our method achieves O(1/√ϵ) and O(1/ϵ) complexity bounds, in terms of the number of inner solver iterations, for the strongly convex and non-strongly convex cases respectively. Without the boundedness assumption, only logarithmic terms need to be added, and the above two complexity bounds increase respectively to Õ(1/√ϵ) and Õ(1/ϵ), which hold both for obtaining ϵ-optimal and ϵ-KKT solutions. Within the general framework that we propose, we also obtain Õ(1/ϵ) and Õ(1/ϵ²) complexity bounds under a relative smoothness assumption on the differentiable component of the objective function. We show, through theoretical analysis as well as numerical experiments, the computational speedup possibly achieved by the use of randomized inner solvers for large-scale problems. | - |
dc.language | eng | - |
dc.publisher | Springer. The Journal's web site is located at https://www.springer.com/journal/12532 | - |
dc.relation.ispartof | Mathematical Programming Computation | - |
dc.subject | Inexact augmented Lagrangian method | - |
dc.subject | Large scale optimization | - |
dc.subject | Randomized first-order method | - |
dc.subject | Explicit inner termination rule | - |
dc.subject | Relative smoothness condition | - |
dc.title | An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization | - |
dc.type | Article | - |
dc.identifier.email | Qu, Z: zhengqu@hku.hk | - |
dc.identifier.authority | Qu, Z=rp02096 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/s12532-021-00205-x | - |
dc.identifier.scopus | eid_2-s2.0-85110681514 | - |
dc.identifier.hkuros | 329923 | - |
dc.identifier.volume | 13 | - |
dc.identifier.issue | 3 | - |
dc.identifier.spage | 583 | - |
dc.identifier.epage | 644 | - |
dc.identifier.isi | WOS:000673691400001 | - |
dc.publisher.place | Germany | - |