Article: Convergence of unregularized online learning algorithms

Title: Convergence of unregularized online learning algorithms
Authors: Lei, Yunwen; Shi, Lei; Guo, Zheng Chu
Keywords: Convergence analysis
Learning theory
Online learning
Reproducing kernel Hilbert space
Issue Date: 2018
Citation: Journal of Machine Learning Research, 2018, v. 18, p. 1-33
Abstract: In this paper we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization. We establish a sufficient condition and a necessary condition for the convergence of excess generalization errors in expectation. A sufficient condition for the almost sure convergence is also given. With high probability, we provide explicit convergence rates of the excess generalization errors for both averaged iterates and the last iterate, which in turn also imply convergence rates with probability one. To the best of our knowledge, this is the first high-probability convergence rate for the last iterate of online gradient descent algorithms in the general convex setting. Without any boundedness assumptions on iterates, our results are derived by a novel use of two measures of the algorithm's one-step progress, respectively by generalization errors and by distances in RKHSs, where the variances of the involved martingales are cancelled out by the descent property of the algorithm.
Persistent Identifier: http://hdl.handle.net/10722/329840
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796
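The abstract describes unregularized online gradient descent in an RKHS, where each iterate lives in the span of the kernel functions at the observed examples and can therefore be stored through its expansion coefficients. The sketch below is an illustrative implementation of that general scheme, not the authors' code: the Gaussian kernel, least-squares loss, and step sizes eta_t = 0.5/sqrt(t) are assumptions chosen for the example (the paper's analysis covers general convex losses and other step-size choices).

```python
import numpy as np

def gaussian_kernel(x, y, sigma=5.0):
    """Gaussian (RBF) kernel on scalar inputs; an illustrative choice of RKHS."""
    return np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_ogd(xs, ys, sigma=5.0):
    """Unregularized online gradient descent in the RKHS, least-squares loss.

    The iterate f_t(x) = sum_i a_i K(x_i, x) is represented by its
    coefficients a_i and anchor points x_i; no regularization term is used.
    """
    coeffs, anchors = [], []
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        # Evaluate the current iterate at the incoming example.
        f_x = sum(a * gaussian_kernel(xi, x, sigma)
                  for a, xi in zip(coeffs, anchors))
        # RKHS gradient of the loss (f(x) - y)^2 / 2 is (f(x) - y) * K(x, .),
        # so the update appends one new expansion term per example.
        eta = 0.5 / np.sqrt(t)          # assumed decaying step size
        coeffs.append(-eta * (f_x - y))
        anchors.append(x)
    return anchors, coeffs

def predict(anchors, coeffs, x, sigma=5.0):
    """Evaluate the last iterate at a new point x."""
    return sum(a * gaussian_kernel(xi, x, sigma)
               for a, xi in zip(coeffs, anchors))
```

One expansion term is added per observed example, so after t rounds the last iterate costs O(t) per evaluation; the averaged iterate analyzed in the paper can be tracked the same way by accumulating a running average of the coefficient vectors.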


DC Field | Value | Language
dc.contributor.author | Lei, Yunwen | -
dc.contributor.author | Shi, Lei | -
dc.contributor.author | Guo, Zheng Chu | -
dc.date.accessioned | 2023-08-09T03:35:43Z | -
dc.date.available | 2023-08-09T03:35:43Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Journal of Machine Learning Research, 2018, v. 18, p. 1-33 | -
dc.identifier.issn | 1532-4435 | -
dc.identifier.uri | http://hdl.handle.net/10722/329840 | -
dc.description.abstract | In this paper we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization. We establish a sufficient condition and a necessary condition for the convergence of excess generalization errors in expectation. A sufficient condition for the almost sure convergence is also given. With high probability, we provide explicit convergence rates of the excess generalization errors for both averaged iterates and the last iterate, which in turn also imply convergence rates with probability one. To the best of our knowledge, this is the first high-probability convergence rate for the last iterate of online gradient descent algorithms in the general convex setting. Without any boundedness assumptions on iterates, our results are derived by a novel use of two measures of the algorithm's one-step progress, respectively by generalization errors and by distances in RKHSs, where the variances of the involved martingales are cancelled out by the descent property of the algorithm. | -
dc.language | eng | -
dc.relation.ispartof | Journal of Machine Learning Research | -
dc.subject | Convergence analysis | -
dc.subject | Learning theory | -
dc.subject | Online learning | -
dc.subject | Reproducing kernel Hilbert space | -
dc.title | Convergence of unregularized online learning algorithms | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85048749125 | -
dc.identifier.volume | 18 | -
dc.identifier.spage | 1 | -
dc.identifier.epage | 33 | -
dc.identifier.eissn | 1533-7928 | -
