Article: Extrapolated Cross-Validation for Randomized Ensembles

Title: Extrapolated Cross-Validation for Randomized Ensembles
Authors: Du, Jin Hong; Patil, Pratik; Roeder, Kathryn; Kuchibhotla, Arun Kumar
Keywords: Bagging; Distributed learning; Ensemble learning; Random forest; Risk extrapolation; Tuning and model selection
Issue Date: 2024
Citation: Journal of Computational and Graphical Statistics, 2024, v. 33, n. 3, p. 1061-1072
Abstract: Ensemble methods such as bagging and random forests are ubiquitous in various fields, from finance to genomics. Despite their prevalence, the question of the efficient tuning of ensemble parameters has received relatively little attention. This article introduces a cross-validation method, Extrapolated Cross-Validation (ECV), for tuning the ensemble and subsample sizes in randomized ensembles. Our method builds on two primary ingredients: initial estimators for small ensemble sizes using out-of-bag errors and a novel risk extrapolation technique that leverages the structure of prediction risk decomposition. By establishing uniform consistency of our risk extrapolation technique over ensemble and subsample sizes, we show that ECV yields δ-optimal (with respect to the oracle-tuned risk) ensembles for squared prediction risk. Our theory accommodates general predictors, only requires mild moment assumptions, and allows for high-dimensional regimes where the feature dimension grows with the sample size. As a practical case study, we employ ECV to predict surface protein abundances from gene expressions in single-cell multiomics using random forests under a computational constraint on the maximum ensemble size. Compared to sample-split and K-fold cross-validation, ECV achieves higher accuracy by avoiding sample splitting. Meanwhile, its computational cost is considerably lower owing to the use of the risk extrapolation technique. Supplementary materials for this article are available online.
Persistent Identifier: http://hdl.handle.net/10722/365525
ISSN: 1061-8600
2023 Impact Factor: 1.4
2023 SCImago Journal Rankings: 1.530
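
The extrapolation step described in the abstract rests on the fact that, under squared loss, the prediction risk of a randomized ensemble of size M decomposes as R_M = a + b/M, so out-of-bag (OOB) risk estimates at M = 1 and M = 2 pin down the entire risk curve. The sketch below illustrates this extrapolation idea on toy data; it is a minimal illustration under stated assumptions, not the authors' reference implementation. The data generator, the subsample size, and the helper names (oob_risk, extrapolated_risk) are assumptions for the example.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression data (an assumption; any supervised task would do).
X, y = make_regression(n_samples=500, n_features=20, noise=1.0, random_state=0)
n = len(y)
k = n // 2        # subsample size, fixed here for the illustration
n_trees = 50

# Grow trees on random subsamples, recording each tree's OOB predictions.
oob_preds = np.full((n_trees, n), np.nan)
for i in range(n_trees):
    idx = rng.choice(n, size=k, replace=False)
    oob = np.setdiff1d(np.arange(n), idx)      # rows tree i never saw
    tree = DecisionTreeRegressor(random_state=i).fit(X[idx], y[idx])
    oob_preds[i, oob] = tree.predict(X[oob])

def oob_risk(rows):
    """Squared OOB risk of the sub-ensemble averaging the given trees."""
    sub = oob_preds[rows]
    cnt = (~np.isnan(sub)).sum(axis=0)
    keep = cnt > 0                              # rows with at least one OOB prediction
    mean_pred = np.nansum(sub[:, keep], axis=0) / cnt[keep]
    return np.mean((y[keep] - mean_pred) ** 2)

# Initial OOB risk estimates at ensemble sizes M = 1 and M = 2.
r1 = np.mean([oob_risk([i]) for i in range(n_trees)])
r2 = np.mean([oob_risk([2 * j, 2 * j + 1]) for j in range(n_trees // 2)])

# R_M = a + b/M with r1 = a + b and r2 = a + b/2 gives
# a = 2*r2 - r1 and b = 2*(r1 - r2).
def extrapolated_risk(M):
    return (2 * r2 - r1) + 2 * (r1 - r2) / M

for M in (1, 2, 5, 10, 100):
    print(f"M = {M:3d}: extrapolated squared risk = {extrapolated_risk(M):.3f}")

Tuning then amounts to evaluating extrapolated_risk over candidate ensemble sizes (and, in the full ECV method, over subsample sizes as well) and choosing the smallest value within the computational budget.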

 

DC Field | Value | Language
dc.contributor.author | Du, Jin Hong | -
dc.contributor.author | Patil, Pratik | -
dc.contributor.author | Roeder, Kathryn | -
dc.contributor.author | Kuchibhotla, Arun Kumar | -
dc.date.accessioned | 2025-11-05T09:41:15Z | -
dc.date.available | 2025-11-05T09:41:15Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Journal of Computational and Graphical Statistics, 2024, v. 33, n. 3, p. 1061-1072 | -
dc.identifier.issn | 1061-8600 | -
dc.identifier.uri | http://hdl.handle.net/10722/365525 | -
dc.description.abstract | Ensemble methods such as bagging and random forests are ubiquitous in various fields, from finance to genomics. Despite their prevalence, the question of the efficient tuning of ensemble parameters has received relatively little attention. This article introduces a cross-validation method, Extrapolated Cross-Validation (ECV), for tuning the ensemble and subsample sizes in randomized ensembles. Our method builds on two primary ingredients: initial estimators for small ensemble sizes using out-of-bag errors and a novel risk extrapolation technique that leverages the structure of prediction risk decomposition. By establishing uniform consistency of our risk extrapolation technique over ensemble and subsample sizes, we show that ECV yields δ-optimal (with respect to the oracle-tuned risk) ensembles for squared prediction risk. Our theory accommodates general predictors, only requires mild moment assumptions, and allows for high-dimensional regimes where the feature dimension grows with the sample size. As a practical case study, we employ ECV to predict surface protein abundances from gene expressions in single-cell multiomics using random forests under a computational constraint on the maximum ensemble size. Compared to sample-split and K-fold cross-validation, ECV achieves higher accuracy by avoiding sample splitting. Meanwhile, its computational cost is considerably lower owing to the use of the risk extrapolation technique. Supplementary materials for this article are available online. | -
dc.language | eng | -
dc.relation.ispartof | Journal of Computational and Graphical Statistics | -
dc.subject | Bagging | -
dc.subject | Distributed learning | -
dc.subject | Ensemble learning | -
dc.subject | Random forest | -
dc.subject | Risk extrapolation | -
dc.subject | Tuning and model selection | -
dc.title | Extrapolated Cross-Validation for Randomized Ensembles | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1080/10618600.2023.2288194 | -
dc.identifier.scopus | eid_2-s2.0-85181247181 | -
dc.identifier.volume | 33 | -
dc.identifier.issue | 3 | -
dc.identifier.spage | 1061 | -
dc.identifier.epage | 1072 | -
dc.identifier.eissn | 1537-2715 | -
