Article: Universal learning using free multivariate splines

Title: Universal learning using free multivariate splines
Authors: Lei, Yunwen; Ding, Lixin; Wu, Weili
Keywords: Complexity regularization; Free multivariate spline; Learning theory; Rademacher complexity
Issue Date: 2013
Citation: Neurocomputing, 2013, v. 119, p. 253-263
Abstract: This paper discusses the problem of universal learning using free multivariate splines of order 1. "Universal" means that the learning algorithm makes no a priori assumption on the regularity of the target function. We characterize the complexity of the space of free multivariate splines via the notion of Rademacher complexity, from which a penalized empirical risk is constructed as an estimate of the expected risk of each candidate model. Our Rademacher complexity bounds are tight up to a logarithmic factor. We show that the prediction rule minimizing the penalized empirical risk achieves a favorable balance between the approximation and estimation errors. By applying techniques from approximation theory to bound the approximation error, we also derive bounds on the generalization error in terms of the sample size for a large class of loss functions. © 2013 Elsevier B.V.
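The complexity-regularization scheme described in the abstract (choose the candidate model minimizing empirical risk plus a Rademacher-complexity penalty) can be illustrated with a small numerical sketch. This is not the paper's method: the paper derives analytic Rademacher bounds for free spline classes, whereas the sketch below estimates the empirical Rademacher complexity of a finite class by Monte Carlo. All names (`empirical_rademacher`, `select_by_penalized_risk`) and the penalty weight are hypothetical, chosen only for illustration.

```python
import random

def empirical_rademacher(loss_rows, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite function class, given the per-sample losses of each candidate
    function (a list of equal-length lists). Illustrative helper only."""
    rng = random.Random(seed)
    n = len(loss_rows[0])
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(n)]  # Rademacher signs
        # sup over the class of the signed empirical average
        total += max(sum(s * l for s, l in zip(sigma, row))
                     for row in loss_rows) / n
    return total / n_draws

def select_by_penalized_risk(classes, penalty_weight=2.0):
    """Complexity regularization: score each candidate class by its best
    empirical risk plus a Rademacher penalty, then pick the smallest score."""
    scores = []
    for loss_rows in classes:
        n = len(loss_rows[0])
        emp_risk = min(sum(row) / n for row in loss_rows)  # best ERM risk in class
        scores.append(emp_risk + penalty_weight * empirical_rademacher(loss_rows))
    return min(range(len(scores)), key=scores.__getitem__), scores
```

The penalty grows with the richness of the class, so richer classes must earn their extra flexibility by reducing empirical risk, which is the balance between approximation and estimation error the abstract refers to.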
Persistent Identifier: http://hdl.handle.net/10722/329282
ISSN: 0925-2312
2023 Impact Factor: 5.5
2023 SCImago Journal Rankings: 1.815
ISI Accession Number: WOS:000323851800031

 

DC Field: Value
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Ding, Lixin
dc.contributor.author: Wu, Weili
dc.date.accessioned: 2023-08-09T03:31:41Z
dc.date.available: 2023-08-09T03:31:41Z
dc.date.issued: 2013
dc.identifier.citation: Neurocomputing, 2013, v. 119, p. 253-263
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/10722/329282
dc.description.abstract: This paper discusses the problem of universal learning using free multivariate splines of order 1. Universal means that the learning algorithm does not involve a priori assumption on the regularity of the target function. We characterize the complexity of the space of free multivariate splines by the remarkable notion called Rademacher complexity, based on which a penalized empirical risk is constructed as an estimation of the expected risk for the candidate model. Our Rademacher complexity bounds are tight within a logarithmic factor. It is shown that the prediction rule minimizing the penalized empirical risk achieves a favorable balance between the approximation and estimation error. By resorting to the powerful techniques in approximation theory to approach the approximation error, we also derive bounds on the generalization error in terms of the sample size, for a large class of loss functions. © 2013 Elsevier B.V.
dc.language: eng
dc.relation.ispartof: Neurocomputing
dc.subject: Complexity regularization
dc.subject: Free multivariate spline
dc.subject: Learning theory
dc.subject: Rademacher complexity
dc.title: Universal learning using free multivariate splines
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.neucom.2013.03.033
dc.identifier.scopus: eid_2-s2.0-84881553680
dc.identifier.volume: 119
dc.identifier.spage: 253
dc.identifier.epage: 263
dc.identifier.eissn: 1872-8286
dc.identifier.isi: WOS:000323851800031

Export via OAI-PMH Interface in XML Formats


OR


Export to Other Non-XML Formats