Conference Paper: Nonsmooth Low-Rank Matrix Recovery: Methodology, Theory and Algorithm

Title: Nonsmooth Low-Rank Matrix Recovery: Methodology, Theory and Algorithm
Authors: Tu, W; Liu, P; Liu, Y; Li, G; Jiang, B; Kong, L
Issue Date: 2021
Citation: Proceedings of the Future Technologies Conference, v. 1, p. 848–862
Abstract: Many interesting problems in statistics and machine learning can be written as min_x F(x) = f(x) + g(x), where x is the model parameter, f is the loss and g is the regularizer. Examples include regularized regression for high-dimensional feature selection and low-rank matrix/tensor factorization. Sometimes the loss function and/or the regularizer is nonsmooth due to the nature of the problem; for example, f(x) could be a quantile loss, used to induce robustness or to focus on parts of the distribution other than the mean. In this paper we propose a general framework for problems with a nonsmooth loss or regularizer, using low-rank matrix recovery as a running example to demonstrate the main idea. The framework involves two main steps: an optimal smoothing of the loss function or regularizer, followed by a gradient-based algorithm applied to the smoothed objective. The proposed smoothing pipeline is highly flexible, computationally efficient, easy to implement, and well suited to problems with high-dimensional data. A strong theoretical convergence guarantee is also established. In the numerical studies, we use the L1 loss as an example to illustrate the practicality of the proposed pipeline; state-of-the-art algorithms such as Adam, NAG and YellowFin all show promising results on the smoothed loss.
Persistent Identifier: http://hdl.handle.net/10722/320352
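
The abstract's two-step framework — replace the nonsmooth piece by a smoothed surrogate, then run a gradient method on the smoothed objective — can be sketched for the L1-loss example it mentions. The NumPy snippet below is a minimal, hypothetical illustration rather than the authors' algorithm: it smooths the absolute value with its Moreau envelope (the Huber function, a standard smoothing choice) and runs plain gradient descent over a rank-constrained factorization U V^T. All function names and parameters (huber, recover_low_rank, mu, lr) are made up for this sketch.

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation): smooth the L1 loss
# with its Moreau envelope (the Huber function), then run gradient descent on
# the smoothed objective over a rank-constrained factorization U @ V.T.

def huber(z, mu):
    """Smoothed absolute value: quadratic near zero, linear (|z| - mu/2) otherwise."""
    return np.where(np.abs(z) <= mu, z**2 / (2 * mu), np.abs(z) - mu / 2)

def huber_grad(z, mu):
    """Gradient of the smoothed absolute value, i.e. z/mu clipped to [-1, 1]."""
    return np.clip(z / mu, -1.0, 1.0)

def recover_low_rank(M_obs, mask, rank=3, mu=0.1, lr=1e-2, iters=2000, seed=0):
    """Gradient descent on sum_ij huber((U V^T - M)_ij, mu) over observed entries."""
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = (U @ V.T - M_obs) * mask      # residuals on observed entries only
        G = huber_grad(R, mu)             # smooth surrogate of the L1 subgradient
        dU, dV = G @ V, G.T @ U           # chain rule through the factorization
        U -= lr * dU
        V -= lr * dV
    return U @ V.T

# Toy usage: recover a rank-3 matrix from 60% of its entries.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
mask = rng.random(M_true.shape) < 0.6
M_hat = recover_low_rank(M_true * mask, mask, rank=3)
print("relative error:", np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true))
```

Because the smoothed objective is differentiable, the plain gradient step above could equally be replaced by any of the gradient-based methods named in the abstract (Adam, NAG, YellowFin).
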

 

DC Field: Value
dc.contributor.author: Tu, W
dc.contributor.author: Liu, P
dc.contributor.author: Liu, Y
dc.contributor.author: Li, G
dc.contributor.author: Jiang, B
dc.contributor.author: Kong, L
dc.date.accessioned: 2022-10-21T07:51:42Z
dc.date.available: 2022-10-21T07:51:42Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the Future Technologies Conference, v. 1, p. 848–862
dc.identifier.uri: http://hdl.handle.net/10722/320352
dc.description.abstract: Many interesting problems in statistics and machine learning can be written as min_x F(x) = f(x) + g(x), where x is the model parameter, f is the loss and g is the regularizer. Examples include regularized regression for high-dimensional feature selection and low-rank matrix/tensor factorization. Sometimes the loss function and/or the regularizer is nonsmooth due to the nature of the problem; for example, f(x) could be a quantile loss, used to induce robustness or to focus on parts of the distribution other than the mean. In this paper we propose a general framework for problems with a nonsmooth loss or regularizer, using low-rank matrix recovery as a running example to demonstrate the main idea. The framework involves two main steps: an optimal smoothing of the loss function or regularizer, followed by a gradient-based algorithm applied to the smoothed objective. The proposed smoothing pipeline is highly flexible, computationally efficient, easy to implement, and well suited to problems with high-dimensional data. A strong theoretical convergence guarantee is also established. In the numerical studies, we use the L1 loss as an example to illustrate the practicality of the proposed pipeline; state-of-the-art algorithms such as Adam, NAG and YellowFin all show promising results on the smoothed loss.
dc.language: eng
dc.relation.ispartof: Proceedings of the Future Technologies Conference
dc.title: Nonsmooth Low-Rank Matrix Recovery: Methodology, Theory and Algorithm
dc.type: Conference_Paper
dc.identifier.email: Li, G: gdli@hku.hk
dc.identifier.authority: Li, G=rp00738
dc.identifier.hkuros: 339990
dc.identifier.volume: 1
dc.identifier.spage: 848
dc.identifier.epage: 862
