Article: Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping

Title: Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping
Authors: Lin, Shao-Bo; Lei, Yunwen; Zhou, Ding-Xuan
Issue Date: 1-Feb-2019
Publisher: Journal of Machine Learning Research
Citation: Journal of Machine Learning Research, 2019, v. 20, n. 46
Abstract

In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off by tuning the number of boosting iterations, in contrast to KRR, which adjusts the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation.
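The core idea described in the abstract can be sketched in a few lines: L2-Boosting repeatedly fits KRR to the current residuals and adds each fit to the running predictor, so the iteration count plays the role the regularization parameter plays in plain KRR. The following is a minimal, hypothetical illustration (not the authors' code); the Gaussian kernel and all parameter values are assumptions for the sake of a runnable example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances, then Gaussian (RBF) kernel values.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def bkrr_fit(X, y, lam=0.1, n_iter=10, sigma=1.0):
    """L2-Boosting applied to KRR (illustrative sketch):
    each iteration fits KRR to the residuals of the current
    predictor and adds the result to the coefficient vector."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    # Precompute the KRR solve operator (K + lam*n*I)^{-1}.
    A = np.linalg.solve(K + lam * n * np.eye(n), np.eye(n))
    alpha = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        alpha += A @ residual        # KRR fit to current residuals
        residual = y - K @ alpha     # update residuals
    return alpha

def bkrr_predict(alpha, X_train, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```

In this sketch, stopping the loop early acts as regularization: few iterations give a heavily smoothed fit, while more iterations drive the training residual down and increase variance, which is the bias-variance trade-off the paper controls with its adaptive stopping rule.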


Persistent Identifier: http://hdl.handle.net/10722/354528
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796


DC Field | Value | Language
dc.contributor.author | Lin, Shao-Bo | -
dc.contributor.author | Lei, Yunwen | -
dc.contributor.author | Zhou, Ding-Xuan | -
dc.date.accessioned | 2025-02-12T00:35:17Z | -
dc.date.available | 2025-02-12T00:35:17Z | -
dc.date.issued | 2019-02-01 | -
dc.identifier.citation | Journal of Machine Learning Research, 2019, v. 20, n. 46 | -
dc.identifier.issn | 1532-4435 | -
dc.identifier.uri | http://hdl.handle.net/10722/354528 | -
dc.description.abstract | In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off by tuning the number of boosting iterations, in contrast to KRR, which adjusts the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation. | -
dc.language | eng | -
dc.publisher | Journal of Machine Learning Research | -
dc.relation.ispartof | Journal of Machine Learning Research | -
dc.title | Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping | -
dc.type | Article | -
dc.identifier.volume | 20 | -
dc.identifier.issue | 46 | -
dc.identifier.eissn | 1533-7928 | -
dc.identifier.issnl | 1532-4435 | -
