Article: Boosted kernel ridge regression: Optimal learning rates and early stopping

Title: Boosted kernel ridge regression: Optimal learning rates and early stopping
Authors: Lin, Shao Bo; Lei, Yunwen; Zhou, Ding Xuan
Keywords: Boosting; Integral operator; Kernel ridge regression; Learning theory
Issue Date: 2019
Citation: Journal of Machine Learning Research, 2019, v. 20
Abstract: In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off via tuning the number of boosting iterations, which differs from KRR, where the trade-off is tuned via the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation.
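The abstract describes the algorithm only at a high level: each boosting round fits a KRR estimator to the current residuals and adds it to the running predictor, with the iteration count playing the role of the regularization knob. The following is a hypothetical numpy sketch of that idea; the kernel choice, parameter values, and fixed iteration count are illustrative assumptions, not the paper's implementation or its adaptive stopping rule.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix of a Gaussian (RBF) kernel; kernel choice is illustrative
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def boosted_krr(X, y, lam=1.0, gamma=1.0, n_iter=10):
    """L2-Boosting applied to kernel ridge regression (hypothetical sketch):
    each round fits a KRR estimator to the current residuals and
    accumulates it into the predictor."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    # KRR solve operator: maps residuals to dual coefficients
    A = np.linalg.inv(K + lam * n * np.eye(n))
    coef = np.zeros(n)
    resid = y.copy()
    for _ in range(n_iter):
        coef += A @ resid          # KRR fit to current residuals
        resid = y - K @ coef       # update residuals
    # return the fitted function; one iteration recovers plain KRR
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ coef

# toy usage on noisy sine data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(60)
f = boosted_krr(X, y, lam=1.0, gamma=5.0, n_iter=20)
pred = f(X)
```

More iterations shrink the bias (the training residual contracts geometrically, since each eigendirection of the Gram matrix is damped by a factor below one) at the cost of variance, which is the bias-variance trade-off in the iteration count that the abstract refers to; early stopping then picks the iteration balancing the two.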
Persistent Identifier: http://hdl.handle.net/10722/329845
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796

 

DC Field | Value | Language
dc.contributor.author | Lin, Shao Bo | -
dc.contributor.author | Lei, Yunwen | -
dc.contributor.author | Zhou, Ding Xuan | -
dc.date.accessioned | 2023-08-09T03:35:45Z | -
dc.date.available | 2023-08-09T03:35:45Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Journal of Machine Learning Research, 2019, v. 20 | -
dc.identifier.issn | 1532-4435 | -
dc.identifier.uri | http://hdl.handle.net/10722/329845 | -
dc.description.abstract | In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with the kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off via tuning the number of boosting iterations, which is different from KRR via adjusting the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation. | -
dc.language | eng | -
dc.relation.ispartof | Journal of Machine Learning Research | -
dc.subject | Boosting | -
dc.subject | Integral operator | -
dc.subject | Kernel ridge regression | -
dc.subject | Learning theory | -
dc.title | Boosted kernel ridge regression: Optimal learning rates and early stopping | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85072647947 | -
dc.identifier.volume | 20 | -
dc.identifier.eissn | 1533-7928 | -
