Conference Paper: An Efficient Augmented Lagrangian-Based Method for Linear Equality-Constrained Lasso

Title: An Efficient Augmented Lagrangian-Based Method for Linear Equality-Constrained Lasso
Authors: Deng, Zengde; Yue, Man Chung; So, Anthony Man Cho
Keywords: augmented Lagrangian; constrained Lasso; semismooth Newton; superlinear convergence
Issue Date: 2020
Citation: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2020, v. 2020-May, p. 5760-5764
Abstract: Variable selection is one of the most important tasks in statistics and machine learning. To incorporate more prior information about the regression coefficients, various constrained Lasso models have been proposed in the literature. Compared with the classic (unconstrained) Lasso model, the algorithmic aspects of constrained Lasso models are much less explored. In this paper, we demonstrate how the recently developed semismooth Newton-based augmented Lagrangian framework can be extended to solve a linear equality-constrained Lasso model. A key technical challenge that is not present in prior works is the lack of strong convexity in our dual problem, which we overcome by adopting a regularization strategy. We show that under mild assumptions, our proposed method will converge superlinearly. Moreover, extensive numerical experiments on both synthetic and real-world data show that our method can be substantially faster than existing first-order methods while achieving a better solution accuracy.
Persistent Identifier: http://hdl.handle.net/10722/313628
ISSN: 1520-6149
ISI Accession Number ID: WOS:000615970406004
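The abstract concerns the linear equality-constrained Lasso, min_x ½‖Ax − b‖² + λ‖x‖₁ subject to Cx = d, solved by an augmented Lagrangian method. As a rough, hedged illustration of that outer scheme only (this is not the paper's algorithm: the inner subproblems below are handled by plain proximal gradient rather than the paper's semismooth Newton solver, and all function names and parameter values are hypothetical), one might sketch:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def eq_constrained_lasso(A, b, C, d, lam, rho=10.0, outer=100, inner=100):
    """Illustrative augmented Lagrangian loop for
         min 0.5 * ||A x - b||^2 + lam * ||x||_1   s.t.   C x = d.
    Inner subproblems are solved (inexactly) by proximal gradient,
    standing in for the paper's semismooth Newton solver."""
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(C.shape[0])  # multipliers for the equality constraint
    # Step size from a Lipschitz bound on the smooth part's gradient
    L = np.linalg.norm(A, 2) ** 2 + rho * np.linalg.norm(C, 2) ** 2
    for _ in range(outer):
        for _ in range(inner):
            # Gradient of 0.5||Ax-b||^2 + y^T(Cx-d) + (rho/2)||Cx-d||^2
            grad = A.T @ (A @ x - b) + C.T @ (y + rho * (C @ x - d))
            x = soft_threshold(x - grad / L, lam / L)
        y = y + rho * (C @ x - d)  # dual ascent on the multipliers
    return x
```

The paper's contribution lies precisely where this sketch is crude: the inner subproblem is solved by a semismooth Newton method, and a regularization strategy restores the strong convexity missing from the dual, yielding superlinear convergence; only the outer augmented Lagrangian structure is mirrored here.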


DC Field: Value
dc.contributor.author: Deng, Zengde
dc.contributor.author: Yue, Man Chung
dc.contributor.author: So, Anthony Man Cho
dc.date.accessioned: 2022-06-23T01:18:48Z
dc.date.available: 2022-06-23T01:18:48Z
dc.date.issued: 2020
dc.identifier.citation: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2020, v. 2020-May, p. 5760-5764
dc.identifier.issn: 1520-6149
dc.identifier.uri: http://hdl.handle.net/10722/313628
dc.description.abstract: Variable selection is one of the most important tasks in statistics and machine learning. To incorporate more prior information about the regression coefficients, various constrained Lasso models have been proposed in the literature. Compared with the classic (unconstrained) Lasso model, the algorithmic aspects of constrained Lasso models are much less explored. In this paper, we demonstrate how the recently developed semismooth Newton-based augmented Lagrangian framework can be extended to solve a linear equality-constrained Lasso model. A key technical challenge that is not present in prior works is the lack of strong convexity in our dual problem, which we overcome by adopting a regularization strategy. We show that under mild assumptions, our proposed method will converge superlinearly. Moreover, extensive numerical experiments on both synthetic and real-world data show that our method can be substantially faster than existing first-order methods while achieving a better solution accuracy.
dc.language: eng
dc.relation.ispartof: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
dc.subject: augmented Lagrangian
dc.subject: constrained Lasso
dc.subject: semismooth Newton
dc.subject: superlinear convergence
dc.title: An Efficient Augmented Lagrangian-Based Method for Linear Equality-Constrained Lasso
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/ICASSP40776.2020.9053722
dc.identifier.scopus: eid_2-s2.0-85089237420
dc.identifier.volume: 2020-May
dc.identifier.spage: 5760
dc.identifier.epage: 5764
dc.identifier.isi: WOS:000615970406004
