Article: Sorted concave penalized regression

Title: Sorted concave penalized regression
Authors: Feng, Long; Zhang, Cun Hui
Keywords: Concave penalties
Local convex approximation
Minimax rate
Penalized least squares
Restricted eigenvalue
Signal strength
Slope
Sorted penalties
Issue Date: 2019
Citation: Annals of Statistics, 2019, v. 47, n. 6, p. 3069-3098
Abstract: The Lasso is biased. Concave penalized least squares estimation (PLSE) takes advantage of signal strength to reduce this bias, leading to sharper error bounds in prediction, coefficient estimation and variable selection. For prediction and estimation, the bias of the Lasso can also be reduced by taking a smaller penalty level than selection consistency requires, but such a smaller penalty level depends on the sparsity of the true coefficient vector. Sorted ℓ1 penalized estimation (Slope) was proposed to adapt to such smaller penalty levels. However, the advantages of concave PLSE and Slope do not subsume each other. We propose sorted concave penalized estimation to combine the advantages of concave and sorted penalizations. We prove that sorted concave penalties adaptively choose the smaller penalty level and at the same time benefit from signal strength, especially when a significant proportion of signals are stronger than the corresponding adaptively selected penalty levels. A local convex approximation for sorted concave penalties, which extends the local linear and quadratic approximations for separable concave penalties, is developed to facilitate the computation of sorted concave PLSE and is proven to possess the desired prediction and estimation error bounds. Our analysis of prediction and estimation errors requires no more than the restricted eigenvalue condition on the design and, in addition, provides selection consistency under a required minimum signal strength condition. Thus, our results also sharpen existing results on concave PLSE by removing the upper sparse eigenvalue component of the sparse Riesz condition.
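A brief sketch of the penalties named in the abstract, written in standard Slope/MCP notation; this notation is an illustration only and is not reproduced from the article. The sorted ℓ1 (Slope) penalty applies a nonincreasing sequence of penalty levels λ1 ≥ … ≥ λp ≥ 0 to the decreasingly ordered absolute coefficients, and a sorted concave penalty can be formed by replacing the linear term with a concave penalty function ρ, such as the minimax concave penalty (MCP):

\[
\mathrm{Pen}_{\mathrm{Slope}}(b) = \sum_{j=1}^{p} \lambda_j\,|b|_{(j)},
\qquad |b|_{(1)} \ge |b|_{(2)} \ge \cdots \ge |b|_{(p)},
\]
\[
\mathrm{Pen}_{\mathrm{sorted\ concave}}(b) = \sum_{j=1}^{p} \rho\bigl(|b|_{(j)};\lambda_j\bigr),
\qquad \rho_{\mathrm{MCP}}(t;\lambda) = \int_0^{t}\bigl(\lambda - x/\gamma\bigr)_{+}\,dx .
\]

For a separable concave penalty, computation via the local linear approximation majorizes ρ at the current iterate b^{(k)},
\[
\rho(|b_j|;\lambda) \le \rho\bigl(|b_j^{(k)}|;\lambda\bigr) + \dot\rho\bigl(|b_j^{(k)}|;\lambda\bigr)\bigl(|b_j| - |b_j^{(k)}|\bigr),
\]
so each step reduces to a weighted Lasso; the local convex approximation described in the abstract plays the analogous role for sorted concave penalties.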
Persistent Identifier: http://hdl.handle.net/10722/318812
ISSN: 0090-5364
2023 Impact Factor: 3.2
2023 SCImago Journal Rankings: 5.335
ISI Accession Number ID: WOS:000493896800003

 

DC Field | Value | Language
dc.contributor.author | Feng, Long | -
dc.contributor.author | Zhang, Cun Hui | -
dc.date.accessioned | 2022-10-11T12:24:37Z | -
dc.date.available | 2022-10-11T12:24:37Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Annals of Statistics, 2019, v. 47, n. 6, p. 3069-3098 | -
dc.identifier.issn | 0090-5364 | -
dc.identifier.uri | http://hdl.handle.net/10722/318812 | -
dc.description.abstract | The Lasso is biased. Concave penalized least squares estimation (PLSE) takes advantage of signal strength to reduce this bias, leading to sharper error bounds in prediction, coefficient estimation and variable selection. For prediction and estimation, the bias of the Lasso can also be reduced by taking a smaller penalty level than selection consistency requires, but such a smaller penalty level depends on the sparsity of the true coefficient vector. Sorted ℓ1 penalized estimation (Slope) was proposed to adapt to such smaller penalty levels. However, the advantages of concave PLSE and Slope do not subsume each other. We propose sorted concave penalized estimation to combine the advantages of concave and sorted penalizations. We prove that sorted concave penalties adaptively choose the smaller penalty level and at the same time benefit from signal strength, especially when a significant proportion of signals are stronger than the corresponding adaptively selected penalty levels. A local convex approximation for sorted concave penalties, which extends the local linear and quadratic approximations for separable concave penalties, is developed to facilitate the computation of sorted concave PLSE and is proven to possess the desired prediction and estimation error bounds. Our analysis of prediction and estimation errors requires no more than the restricted eigenvalue condition on the design and, in addition, provides selection consistency under a required minimum signal strength condition. Thus, our results also sharpen existing results on concave PLSE by removing the upper sparse eigenvalue component of the sparse Riesz condition. | -
dc.language | eng | -
dc.relation.ispartof | Annals of Statistics | -
dc.subject | Concave penalties | -
dc.subject | Local convex approximation | -
dc.subject | Minimax rate | -
dc.subject | Penalized least squares | -
dc.subject | Restricted eigenvalue | -
dc.subject | Signal strength | -
dc.subject | Slope | -
dc.subject | Sorted penalties | -
dc.title | Sorted concave penalized regression | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1214/18-AOS1759 | -
dc.identifier.scopus | eid_2-s2.0-85078981162 | -
dc.identifier.volume | 47 | -
dc.identifier.issue | 6 | -
dc.identifier.spage | 3069 | -
dc.identifier.epage | 3098 | -
dc.identifier.eissn | 2168-8966 | -
dc.identifier.isi | WOS:000493896800003 | -
