Article: Online pairwise learning algorithms with convex loss functions

Title: Online pairwise learning algorithms with convex loss functions
Authors: Lin, Junhong; Lei, Yunwen; Zhang, Bo; Zhou, Ding Xuan
Keywords: Learning theory; Online learning; Pairwise learning; Reproducing Kernel Hilbert Space
Issue Date: 2017
Citation: Information Sciences, 2017, v. 406-407, p. 57-70
Abstract: Online pairwise learning algorithms with general convex loss functions without regularization in a Reproducing Kernel Hilbert Space (RKHS) are investigated. Under mild conditions on the loss functions and the RKHS, upper bounds for the expected excess generalization error are derived in terms of the approximation error when the stepsize sequence decays polynomially. In particular, for Lipschitz loss functions such as the hinge loss, the logistic loss and the absolute-value loss, the bounds can be of order O(formula omitted) after T iterations, while for the least squares loss, the bounds can be of order O(formula omitted). In comparison with previous work on these algorithms, a broader family of convex loss functions is studied here, and refined upper bounds are obtained.
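The setting the abstract describes — unregularized online gradient descent over pairs of examples in an RKHS, with a polynomially decaying stepsize sequence — can be sketched as follows. This is an illustrative sketch only, not the paper's exact algorithm: the hinge loss and a Gaussian kernel are chosen as concrete examples from the loss family the abstract mentions, and all function names and parameter values are hypothetical.

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    """Gaussian (RBF) kernel; one example of an RKHS-inducing kernel."""
    return np.exp(-np.sum((x - xp) ** 2) / (2 * sigma ** 2))

def online_pairwise_hinge(X, y, eta1=0.1, theta=0.5, sigma=1.0):
    """Online pairwise learning sketch with hinge loss, no regularization.

    The iterate f_t lives in the RKHS, represented as
    f_t = sum_i alphas[i] * K(x_i, .). At step t the update averages
    the pairwise hinge subgradient over pairs (z_t, z_j), j < t, using
    a polynomially decaying stepsize eta_t = eta1 * t**(-theta).
    """
    T = len(y)
    alphas = np.zeros(T)
    # Precompute the kernel matrix for the sample (small-sample sketch).
    K = np.array([[gaussian_kernel(X[i], X[j], sigma)
                   for j in range(T)] for i in range(T)])
    for t in range(1, T):
        eta_t = eta1 * t ** (-theta)      # polynomial stepsize decay
        grad = np.zeros(T)
        cnt = 0
        for j in range(t):
            r = y[t] - y[j]
            if r == 0:                    # skip ties: no pairwise label
                continue
            s = np.sign(r)
            # f_t(x_t) - f_t(x_j) in coefficient form
            margin = s * (K[t] @ alphas - K[j] @ alphas)
            if margin < 1:                # hinge subgradient is active
                grad[t] += -s
                grad[j] += s
            cnt += 1
        if cnt:
            alphas -= eta_t * grad / cnt  # averaged pairwise update
    return alphas, K
```

To predict on a new point x, evaluate f(x) = sum_i alphas[i] * K(x_i, x). The trailing average over earlier examples is what makes the algorithm "pairwise": each step's subgradient depends on pairs of observations rather than a single one.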
Persistent Identifier: http://hdl.handle.net/10722/329438
ISSN: 0020-0255
2022 Impact Factor: 8.1
2023 SCImago Journal Rankings: 2.238
ISI Accession Number ID: WOS:000401886700005

 

DC Field: Value
dc.contributor.author: Lin, Junhong
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Zhang, Bo
dc.contributor.author: Zhou, Ding Xuan
dc.date.accessioned: 2023-08-09T03:32:47Z
dc.date.available: 2023-08-09T03:32:47Z
dc.date.issued: 2017
dc.identifier.citation: Information Sciences, 2017, v. 406-407, p. 57-70
dc.identifier.issn: 0020-0255
dc.identifier.uri: http://hdl.handle.net/10722/329438
dc.description.abstract: Online pairwise learning algorithms with general convex loss functions without regularization in a Reproducing Kernel Hilbert Space (RKHS) are investigated. Under mild conditions on the loss functions and the RKHS, upper bounds for the expected excess generalization error are derived in terms of the approximation error when the stepsize sequence decays polynomially. In particular, for Lipschitz loss functions such as the hinge loss, the logistic loss and the absolute-value loss, the bounds can be of order O(formula omitted) after T iterations, while for the least squares loss, the bounds can be of order O(formula omitted). In comparison with previous work on these algorithms, a broader family of convex loss functions is studied here, and refined upper bounds are obtained.
dc.language: eng
dc.relation.ispartof: Information Sciences
dc.subject: Learning theory
dc.subject: Online learning
dc.subject: Pairwise learning
dc.subject: Reproducing Kernel Hilbert Space
dc.title: Online pairwise learning algorithms with convex loss functions
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.ins.2017.04.022
dc.identifier.scopus: eid_2-s2.0-85017645900
dc.identifier.volume: 406-407
dc.identifier.spage: 57
dc.identifier.epage: 70
dc.identifier.isi: WOS:000401886700005
