Conference Paper: Fine-grained Generalization Analysis of Vector-valued Learning

Title: Fine-grained Generalization Analysis of Vector-valued Learning
Authors: Wu, Liang; Ledent, Antoine; Lei, Yunwen; Kloft, Marius
Issue Date: 2021
Citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 12A, p. 10338-10346
Abstract: Many fundamental machine learning tasks can be formulated as a problem of learning with vector-valued functions, where we learn multiple scalar-valued functions together. Although there is some generalization analysis on different specific algorithms under the empirical risk minimization principle, a unifying analysis of vector-valued learning under a regularization framework is still lacking. In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample size. Our discussions relax the existing assumptions on the restrictive constraint of hypothesis spaces, smoothness of loss functions and low-noise condition. To understand the interaction between optimization and learning, we further use our results to derive the first generalization bounds for stochastic gradient descent with vector-valued functions. We apply our general results to multi-class classification and multi-label classification, which yield the first bounds with a logarithmic dependency on the output dimension for extreme multi-label classification with the Frobenius regularization. As a byproduct, we derive a Rademacher complexity bound for loss function classes defined in terms of a general strongly convex function.
Persistent Identifier: http://hdl.handle.net/10722/329722
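The abstract above refers to regularized vector-valued learning and to stochastic gradient descent over vector-valued functions. The following is a minimal illustrative sketch, not the authors' implementation: a linear vector-valued predictor f(x) = Wx with Frobenius-norm regularization, trained by SGD on a softmax cross-entropy loss for multi-class classification. The function name, hyperparameters, and toy data are assumptions made purely for the example.

    # Minimal sketch (assumed setup, not the paper's code): a linear vector-valued
    # hypothesis W in R^{c x d}, Frobenius regularization lam * ||W||_F^2, and
    # softmax cross-entropy loss, trained with plain SGD.
    import numpy as np

    def sgd_vector_valued(X, y, num_classes, lam=1e-3, lr=0.1, epochs=5, seed=0):
        """SGD on the regularized empirical risk of a linear vector-valued predictor."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W = np.zeros((num_classes, d))          # one scalar-valued function per output
        for _ in range(epochs):
            for i in rng.permutation(n):
                scores = W @ X[i]               # f(x_i) in R^c
                scores = scores - scores.max()  # numerical stability for softmax
                p = np.exp(scores) / np.exp(scores).sum()
                p[y[i]] -= 1.0                  # gradient of cross-entropy w.r.t. scores
                grad = np.outer(p, X[i]) + 2 * lam * W  # add Frobenius-regularization term
                W -= lr * grad
        return W

    # Toy usage: a 3-class problem in R^5 with random data (illustrative only).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 3, size=200)
    W = sgd_vector_valued(X, y, num_classes=3)
    print("Predicted class for the first sample:", int(np.argmax(W @ X[0])))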

 

Dublin Core metadata (DC Field: Value)
dc.contributor.author: Wu, Liang
dc.contributor.author: Ledent, Antoine
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Kloft, Marius
dc.date.accessioned: 2023-08-09T03:34:52Z
dc.date.available: 2023-08-09T03:34:52Z
dc.date.issued: 2021
dc.identifier.citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, v. 12A, p. 10338-10346
dc.identifier.uri: http://hdl.handle.net/10722/329722
dc.description.abstract: Many fundamental machine learning tasks can be formulated as a problem of learning with vector-valued functions, where we learn multiple scalar-valued functions together. Although there is some generalization analysis on different specific algorithms under the empirical risk minimization principle, a unifying analysis of vector-valued learning under a regularization framework is still lacking. In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample size. Our discussions relax the existing assumptions on the restrictive constraint of hypothesis spaces, smoothness of loss functions and low-noise condition. To understand the interaction between optimization and learning, we further use our results to derive the first generalization bounds for stochastic gradient descent with vector-valued functions. We apply our general results to multi-class classification and multi-label classification, which yield the first bounds with a logarithmic dependency on the output dimension for extreme multi-label classification with the Frobenius regularization. As a byproduct, we derive a Rademacher complexity bound for loss function classes defined in terms of a general strongly convex function.
dc.language: eng
dc.relation.ispartof: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
dc.title: Fine-grained Generalization Analysis of Vector-valued Learning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85108779458
dc.identifier.volume: 12A
dc.identifier.spage: 10338
dc.identifier.epage: 10346
