File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1080/10618600.2023.2288194
- Scopus: eid_2-s2.0-85181247181
Citations:
- Scopus: 0
Article: Extrapolated Cross-Validation for Randomized Ensembles
| Title | Extrapolated Cross-Validation for Randomized Ensembles |
|---|---|
| Authors | Du, Jin Hong; Patil, Pratik; Roeder, Kathryn; Kuchibhotla, Arun Kumar |
| Keywords | Bagging; Distributed learning; Ensemble learning; Random forest; Risk extrapolation; Tuning and model selection |
| Issue Date | 2024 |
| Citation | Journal of Computational and Graphical Statistics, 2024, v. 33, n. 3, p. 1061-1072 |
| Abstract | Ensemble methods such as bagging and random forests are ubiquitous in various fields, from finance to genomics. Despite their prevalence, the question of the efficient tuning of ensemble parameters has received relatively little attention. This article introduces a cross-validation method, Extrapolated Cross-Validation (ECV), for tuning the ensemble and subsample sizes in randomized ensembles. Our method builds on two primary ingredients: initial estimators for small ensemble sizes using out-of-bag errors and a novel risk extrapolation technique that leverages the structure of prediction risk decomposition. By establishing uniform consistency of our risk extrapolation technique over ensemble and subsample sizes, we show that ECV yields δ-optimal (with respect to the oracle-tuned risk) ensembles for squared prediction risk. Our theory accommodates general predictors, only requires mild moment assumptions, and allows for high-dimensional regimes where the feature dimension grows with the sample size. As a practical case study, we employ ECV to predict surface protein abundances from gene expressions in single-cell multiomics using random forests under a computational constraint on the maximum ensemble size. Compared to sample-split and K-fold cross-validation, ECV achieves higher accuracy by avoiding sample splitting. Meanwhile, its computational cost is considerably lower owing to the use of the risk extrapolation technique. Supplementary materials for this article are available online. |
| Persistent Identifier | http://hdl.handle.net/10722/365525 |
| ISSN | 1061-8600 (2023 Impact Factor: 1.4; 2023 SCImago Journal Rankings: 1.530) |
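
The abstract above describes ECV's two ingredients: out-of-bag (OOB) risk estimates for small ensemble sizes and an extrapolation of the risk curve over the ensemble size. The following is a minimal Python sketch of that idea, under the assumption that the squared prediction risk of a subagged ensemble of size M decomposes as R(M) = R(∞) + (R(1) − R(∞))/M, so OOB estimates of R(1) and R(2) determine R̂(M) = (2R̂₂ − R̂₁) + 2(R̂₁ − R̂₂)/M for every M. The toy data, scikit-learn's `BaggingRegressor`, and the helper `oob_risk` are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of risk extrapolation for a randomized (subagged) ensemble.
# Assumption: R(M) = R(inf) + (R(1) - R(inf)) / M for squared prediction risk,
# so fitting only two base learners suffices to estimate the whole risk curve.
# Requires scikit-learn >= 1.2 (for the `estimator` keyword).
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, p = 500, 20
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)

# Fit only M0 = 2 base learners on subsamples drawn without replacement.
M0 = 2
ens = BaggingRegressor(
    estimator=DecisionTreeRegressor(),
    n_estimators=M0,
    max_samples=0.5,
    bootstrap=False,  # subsampling without replacement ("subagging")
    random_state=0,
).fit(X, y)

def oob_risk(members):
    """Mean OOB squared error of the sub-ensemble given by `members`."""
    in_bag = np.zeros(n, dtype=bool)
    for i in members:
        in_bag[ens.estimators_samples_[i]] = True
    oob = ~in_bag  # points unseen by every member of the sub-ensemble
    preds = np.mean([ens.estimators_[i].predict(X[oob]) for i in members], axis=0)
    return np.mean((y[oob] - preds) ** 2)

R1 = np.mean([oob_risk([i]) for i in range(M0)])  # 1-ensemble risk estimate
R2 = oob_risk([0, 1])                             # 2-ensemble risk estimate

def extrapolated_risk(M):
    # R(M) ~= R(inf) + (R(1) - R(inf)) / M, with R(inf) ~= 2*R(2) - R(1).
    return (2 * R2 - R1) + 2 * (R1 - R2) / M

for M in (1, 2, 5, 10, 50, 100):
    print(f"M = {M:4d}: extrapolated squared risk ~ {extrapolated_risk(M):.3f}")
```

In this sketch the extrapolated curve can then be minimized over M (and, by repeating the fit, over the subsample size) without ever training a large ensemble, which is the computational saving the abstract attributes to ECV.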
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Du, Jin Hong | - |
| dc.contributor.author | Patil, Pratik | - |
| dc.contributor.author | Roeder, Kathryn | - |
| dc.contributor.author | Kuchibhotla, Arun Kumar | - |
| dc.date.accessioned | 2025-11-05T09:41:15Z | - |
| dc.date.available | 2025-11-05T09:41:15Z | - |
| dc.date.issued | 2024 | - |
| dc.identifier.citation | Journal of Computational and Graphical Statistics, 2024, v. 33, n. 3, p. 1061-1072 | - |
| dc.identifier.issn | 1061-8600 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/365525 | - |
| dc.description.abstract | Ensemble methods such as bagging and random forests are ubiquitous in various fields, from finance to genomics. Despite their prevalence, the question of the efficient tuning of ensemble parameters has received relatively little attention. This article introduces a cross-validation method, Extrapolated Cross-Validation (ECV), for tuning the ensemble and subsample sizes in randomized ensembles. Our method builds on two primary ingredients: initial estimators for small ensemble sizes using out-of-bag errors and a novel risk extrapolation technique that leverages the structure of prediction risk decomposition. By establishing uniform consistency of our risk extrapolation technique over ensemble and subsample sizes, we show that ECV yields δ-optimal (with respect to the oracle-tuned risk) ensembles for squared prediction risk. Our theory accommodates general predictors, only requires mild moment assumptions, and allows for high-dimensional regimes where the feature dimension grows with the sample size. As a practical case study, we employ ECV to predict surface protein abundances from gene expressions in single-cell multiomics using random forests under a computational constraint on the maximum ensemble size. Compared to sample-split and K-fold cross-validation, ECV achieves higher accuracy by avoiding sample splitting. Meanwhile, its computational cost is considerably lower owing to the use of the risk extrapolation technique. Supplementary materials for this article are available online. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Journal of Computational and Graphical Statistics | - |
| dc.subject | Bagging | - |
| dc.subject | Distributed learning | - |
| dc.subject | Ensemble learning | - |
| dc.subject | Random forest | - |
| dc.subject | Risk extrapolation | - |
| dc.subject | Tuning and model selection | - |
| dc.title | Extrapolated Cross-Validation for Randomized Ensembles | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1080/10618600.2023.2288194 | - |
| dc.identifier.scopus | eid_2-s2.0-85181247181 | - |
| dc.identifier.volume | 33 | - |
| dc.identifier.issue | 3 | - |
| dc.identifier.spage | 1061 | - |
| dc.identifier.epage | 1072 | - |
| dc.identifier.eissn | 1537-2715 | - |
