Conference Paper: Generalization Guarantee of SGD for Pairwise Learning

Title: Generalization Guarantee of SGD for Pairwise Learning
Authors: Lei, Yunwen; Liu, Mingrui; Ying, Yiming
Issue Date: 2021
Citation: Advances in Neural Information Processing Systems, 2021, v. 26, p. 21216-21228
Abstract: Recently, there has been growing interest in studying pairwise learning, since it includes many important machine learning tasks as specific examples, e.g., metric learning, AUC maximization, and ranking. While stochastic gradient descent (SGD) is an efficient method, its generalization behavior for pairwise learning remains understudied. In this paper, we present a systematic study of the generalization analysis of SGD for pairwise learning to understand the balance between generalization and optimization. We develop a novel high-probability generalization bound for uniformly stable algorithms that incorporates variance information for better generalization, based on which we establish the first nonsmooth learning algorithm to achieve almost optimal high-probability and dimension-independent excess risk bounds with O(n) gradient computations. We consider both convex and nonconvex pairwise learning problems. Our stability analysis for convex problems shows how interpolation can help generalization. We establish uniform convergence of gradients and apply it to derive the first excess risk bounds on population gradients for nonconvex pairwise learning. Finally, we extend our stability analysis to pairwise learning with gradient-dominated problems.
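
The paper analyzes SGD applied to pairwise loss functions, where each loss term depends on a pair of examples rather than a single one (as in AUC maximization, metric learning, or ranking). As a minimal illustrative sketch, not taken from the paper, the Python snippet below instantiates this setup as linear AUC maximization with a pairwise logistic loss: each SGD step samples one positive/negative pair and takes a gradient step on the loss of that pair. All names (pairwise_sgd, the step-size schedule, the toy data) are assumptions for illustration, not the algorithm or constants analyzed in the paper.

    # Illustrative sketch (not from the paper): SGD for pairwise learning,
    # instantiated as linear AUC maximization with a pairwise logistic loss.
    import numpy as np

    def pairwise_sgd(X, y, n_iters=10000, step_size=0.1, seed=0):
        """Minimize the average of log(1 + exp(-(w^T x_pos - w^T x_neg))) over pairs."""
        rng = np.random.default_rng(seed)
        pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
        w = np.zeros(X.shape[1])
        for t in range(1, n_iters + 1):
            i, j = rng.choice(pos), rng.choice(neg)         # sample one positive/negative pair
            margin = w @ (X[i] - X[j])                      # pairwise score difference
            grad = -(X[i] - X[j]) / (1.0 + np.exp(margin))  # gradient of the pairwise logistic loss
            w -= (step_size / np.sqrt(t)) * grad            # decaying step size eta_t
        return w

    # Usage on toy data
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = np.where(X[:, 0] + 0.1 * rng.normal(size=200) > 0, 1, -1)
    w = pairwise_sgd(X, y)
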
Persistent Identifier: http://hdl.handle.net/10722/329858
ISSN: 1049-5258
2020 SCImago Journal Rankings: 1.399

 

DC Field: Value
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Liu, Mingrui
dc.contributor.author: Ying, Yiming
dc.date.accessioned: 2023-08-09T03:35:51Z
dc.date.available: 2023-08-09T03:35:51Z
dc.date.issued: 2021
dc.identifier.citation: Advances in Neural Information Processing Systems, 2021, v. 26, p. 21216-21228
dc.identifier.issn: 1049-5258
dc.identifier.uri: http://hdl.handle.net/10722/329858
dc.description.abstract: Recently, there has been growing interest in studying pairwise learning, since it includes many important machine learning tasks as specific examples, e.g., metric learning, AUC maximization, and ranking. While stochastic gradient descent (SGD) is an efficient method, its generalization behavior for pairwise learning remains understudied. In this paper, we present a systematic study of the generalization analysis of SGD for pairwise learning to understand the balance between generalization and optimization. We develop a novel high-probability generalization bound for uniformly stable algorithms that incorporates variance information for better generalization, based on which we establish the first nonsmooth learning algorithm to achieve almost optimal high-probability and dimension-independent excess risk bounds with O(n) gradient computations. We consider both convex and nonconvex pairwise learning problems. Our stability analysis for convex problems shows how interpolation can help generalization. We establish uniform convergence of gradients and apply it to derive the first excess risk bounds on population gradients for nonconvex pairwise learning. Finally, we extend our stability analysis to pairwise learning with gradient-dominated problems.
dc.language: eng
dc.relation.ispartof: Advances in Neural Information Processing Systems
dc.title: Generalization Guarantee of SGD for Pairwise Learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85131811176
dc.identifier.volume: 26
dc.identifier.spage: 21216
dc.identifier.epage: 21228
