High-Dimensional L2-Boosting: Rate of Convergence
| Title | High-Dimensional L2-Boosting: Rate of Convergence |
|---|---|
| Authors | Luo, Ye; Spindler, Martin; Kueck, Jannis |
| Issue Date | 1-Mar-2025 |
| Publisher | Microtome Publishing |
| Citation | Journal of Machine Learning Research, 2025, v. 26, n. 89, p. 1-54 |
| Abstract | Boosting is one of the most significant developments in machine learning. This paper studies the rate of convergence of L2-Boosting in a high-dimensional setting under early stopping. We close a gap in the literature and provide the rate of convergence of L2-Boosting in a high-dimensional setting under approximate sparsity and without a beta-min condition. We also show that the rate of convergence of classical L2-Boosting depends on the design matrix, as described by a sparse eigenvalue condition. To show the latter results, we derive new, improved approximation results for the pure greedy algorithm, based on analyzing the revisiting behavior of L2-Boosting. These results might be of independent interest. Moreover, we introduce so-called "restricted L2-Boosting". The restricted L2-Boosting algorithm sticks to the set of previously chosen variables, exploits the information contained in these variables first, and only occasionally allows new variables to be added to this set. We derive the rate of convergence for restricted L2-Boosting under early stopping, which is close to the convergence rate of Lasso in an approximately sparse, high-dimensional setting without a beta-min condition. We also introduce feasible rules for early stopping, which can be easily implemented and used in applied work. Finally, we present simulation studies to illustrate the relevance of our theoretical results and to provide insights into the practical aspects of boosting. In these simulation studies, L2-Boosting clearly outperforms Lasso. An empirical illustration and the proofs are contained in the Appendix. |
| Persistent Identifier | http://hdl.handle.net/10722/358552 |
| ISSN | 1532-4435 |
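
The abstract above refers to classical L2-Boosting, i.e. the pure greedy algorithm: repeatedly fit the single predictor most correlated with the current residual, take the greedy step, and stop early. The following is a minimal illustrative sketch in Python/NumPy, not the authors' code; the fixed iteration budget `m_stop` is an assumption standing in for the paper's feasible early-stopping rules, and all identifiers are invented for illustration.

```python
import numpy as np

def l2_boost(X, y, m_stop=50):
    """Componentwise L2-Boosting (pure greedy algorithm), sketched with a
    fixed iteration budget as a stand-in for data-driven early stopping."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    norms = (X ** 2).sum(axis=0)               # squared column norms ||x_j||^2
    for _ in range(m_stop):                    # early stopping: fixed budget
        corr = X.T @ resid                     # <x_j, r> for every predictor j
        j = int(np.argmax(corr ** 2 / norms))  # best univariate fit to residual
        step = corr[j] / norms[j]              # least-squares coefficient on x_j
        beta[j] += step                        # full greedy step (no shrinkage)
        resid -= step * X[:, j]                # update the residual
    return beta
```

Each iteration adds (or revisits) at most one variable, so the iteration count directly controls model complexity, which is why the stopping rule drives the convergence rates studied in the paper.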
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Luo, Ye | - |
| dc.contributor.author | Spindler, Martin | - |
| dc.contributor.author | Kueck, Jannis | - |
| dc.date.accessioned | 2025-08-07T00:32:59Z | - |
| dc.date.available | 2025-08-07T00:32:59Z | - |
| dc.date.issued | 2025-03-01 | - |
| dc.identifier.citation | Journal of Machine Learning Research, 2025, v. 26, n. 89, p. 1-54 | - |
| dc.identifier.issn | 1532-4435 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/358552 | - |
| dc.description.abstract | Boosting is one of the most significant developments in machine learning. This paper studies the rate of convergence of L2-Boosting in a high-dimensional setting under early stopping. We close a gap in the literature and provide the rate of convergence of L2-Boosting in a high-dimensional setting under approximate sparsity and without a beta-min condition. We also show that the rate of convergence of classical L2-Boosting depends on the design matrix, as described by a sparse eigenvalue condition. To show the latter results, we derive new, improved approximation results for the pure greedy algorithm, based on analyzing the revisiting behavior of L2-Boosting. These results might be of independent interest. Moreover, we introduce so-called "restricted L2-Boosting". The restricted L2-Boosting algorithm sticks to the set of previously chosen variables, exploits the information contained in these variables first, and only occasionally allows new variables to be added to this set. We derive the rate of convergence for restricted L2-Boosting under early stopping, which is close to the convergence rate of Lasso in an approximately sparse, high-dimensional setting without a beta-min condition. We also introduce feasible rules for early stopping, which can be easily implemented and used in applied work. Finally, we present simulation studies to illustrate the relevance of our theoretical results and to provide insights into the practical aspects of boosting. In these simulation studies, L2-Boosting clearly outperforms Lasso. An empirical illustration and the proofs are contained in the Appendix. | - |
| dc.language | eng | - |
| dc.publisher | Microtome Publishing | - |
| dc.relation.ispartof | Journal of Machine Learning Research | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.title | High-Dimensional L2-Boosting: Rate of Convergence | - |
| dc.type | Article | - |
| dc.identifier.volume | 26 | - |
| dc.identifier.issue | 89 | - |
| dc.identifier.spage | 1 | - |
| dc.identifier.epage | 54 | - |
| dc.identifier.eissn | 1533-7928 | - |
| dc.identifier.issnl | 1532-4435 | - |
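
The abstract also describes the restricted variant: boosting stays on the already-selected variables, exploits them first, and only occasionally admits a new one. A heavily hedged illustration follows; the every-K-iterations admission rule is a purely illustrative assumption, not the paper's actual rule, and all identifiers are invented.

```python
import numpy as np

def restricted_l2_boost(X, y, m_stop=50, K=5):
    """Sketch of the restricted L2-Boosting idea: boost within the active
    set, occasionally allowing a new variable in (here: every K-th step)."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    norms = (X ** 2).sum(axis=0)           # squared column norms ||x_j||^2
    active = []                            # variables selected so far
    for m in range(m_stop):
        corr = X.T @ resid
        gains = corr ** 2 / norms          # univariate fit gain per variable
        if not active or m % K == 0:       # occasionally admit a new variable
            j = int(np.argmax(gains))
        else:                              # otherwise stay on the active set
            j = max(active, key=lambda k: gains[k])
        if j not in active:
            active.append(j)
        step = corr[j] / norms[j]
        beta[j] += step
        resid -= step * X[:, j]
    return beta
```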

