Article: Early Stopping for Iterative Regularization with General Loss Functions

Title: Early Stopping for Iterative Regularization with General Loss Functions
Authors: Hu, Ting; Lei, Yunwen
Keywords: cross-validation; early stopping; iterative regularization; reproducing kernel Hilbert spaces; stopping rule
Issue Date: 2022
Citation: Journal of Machine Learning Research, 2022, v. 23, article no. 339
Abstract: In this paper, we investigate the early stopping strategy for the iterative regularization technique, which is based on gradient descent of convex loss functions in reproducing kernel Hilbert spaces without an explicit regularization term. This work shows that projecting the last iterate at the stopping time produces an estimator that can improve the generalization ability. Using the upper bound of the generalization errors, we establish a close link between the iterative regularization and the Tikhonov regularization scheme, and explain theoretically why the two schemes have similar regularization paths in existing numerical simulations. We introduce a data-dependent way, based on cross-validation, to select the stopping time. We prove that this a posteriori selection rule retains generalization errors comparable to those obtained by our stopping rules with a priori parameters.
Persistent Identifier: http://hdl.handle.net/10722/329923
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796
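
The approach summarized in the abstract — gradient descent on a convex loss in a reproducing kernel Hilbert space with no explicit penalty term, stopped at a data-dependent time selected by cross-validation — can be illustrated in a few lines. The toy below is a minimal sketch under assumptions: squared loss, a Gaussian kernel, and a single hold-out split standing in for the paper's cross-validation rule; all function names and parameters here are illustrative, not the authors' implementation.

import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_gd_early_stopping(X_tr, y_tr, X_val, y_val,
                             eta=0.5, max_iter=500, sigma=1.0):
    # Functional gradient descent for kernel least squares with no
    # regularization term; the stopping time is chosen a posteriori on a
    # hold-out set (an assumption standing in for cross-validation).
    K = gaussian_kernel(X_tr, X_tr, sigma)       # training Gram matrix
    K_val = gaussian_kernel(X_val, X_tr, sigma)  # validation cross-kernel
    n = len(y_tr)
    alpha = np.zeros(n)                          # coefficients of f_0 = 0
    best_err, best_alpha, best_t = np.inf, alpha.copy(), 0
    for t in range(1, max_iter + 1):
        # One gradient step on the empirical squared loss.
        alpha -= (eta / n) * (K @ alpha - y_tr)
        val_err = np.mean((K_val @ alpha - y_val) ** 2)
        if val_err < best_err:                   # track the best stopping time
            best_err, best_alpha, best_t = val_err, alpha.copy(), t
    return best_alpha, best_t

# Toy usage: noisy sine regression with a 150/50 train/hold-out split.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(200)
alpha, t_stop = kernel_gd_early_stopping(X[:150], y[:150], X[150:], y[150:])
print("selected stopping time:", t_stop)

Iterating past the selected time typically starts fitting the noise; this is the implicit-regularization trade-off that, in the paper, is controlled by theoretically grounded stopping rules and their cross-validation-based alternative.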

DC Field: Value
dc.contributor.author: Hu, Ting
dc.contributor.author: Lei, Yunwen
dc.date.accessioned: 2023-08-09T03:36:28Z
dc.date.available: 2023-08-09T03:36:28Z
dc.date.issued: 2022
dc.identifier.citation: Journal of Machine Learning Research, 2022, v. 23, article no. 339
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/10722/329923
dc.description.abstract: In this paper, we investigate the early stopping strategy for the iterative regularization technique, which is based on gradient descent of convex loss functions in reproducing kernel Hilbert spaces without an explicit regularization term. This work shows that projecting the last iterate at the stopping time produces an estimator that can improve the generalization ability. Using the upper bound of the generalization errors, we establish a close link between the iterative regularization and the Tikhonov regularization scheme, and explain theoretically why the two schemes have similar regularization paths in existing numerical simulations. We introduce a data-dependent way, based on cross-validation, to select the stopping time. We prove that this a posteriori selection rule retains generalization errors comparable to those obtained by our stopping rules with a priori parameters.
dc.language: eng
dc.relation.ispartof: Journal of Machine Learning Research
dc.subject: cross-validation
dc.subject: early stopping
dc.subject: iterative regularization
dc.subject: reproducing kernel Hilbert spaces
dc.subject: stopping rule
dc.title: Early Stopping for Iterative Regularization with General Loss Functions
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85148061967
dc.identifier.volume: 23
dc.identifier.spage: article no. 339
dc.identifier.epage: article no. 339
dc.identifier.eissn: 1533-7928
