Article: Local Rademacher complexity-based learning guarantees for multi-task learning

Title: Local Rademacher complexity-based learning guarantees for multi-task learning
Authors: Yousefi, Niloofar; Lei, Yunwen; Kloft, Marius; Mollaghasemi, Mansooreh; Anagnostopoulos, Georgios C.
Keywords: Excess Risk Bounds; Local Rademacher Complexity; Multi-task Learning
Issue Date: 2018
Citation: Journal of Machine Learning Research, 2018, v. 19
Abstract: We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we establish sharp excess risk bounds for MTL in terms of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for any norm regularized hypothesis classes, which applies not only to MTL, but also to the standard Single-Task Learning (STL) setting. By combining both results, one can easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten norm, group norm, and graph regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates. When compared to the rates obtained via a traditional, global Rademacher analysis, this very relationship allows for trading off slower rates with respect to the number of tasks for faster rates with respect to the number of available samples per task.
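
For orientation, the sketch below writes out the textbook definition of the empirical Rademacher complexity of a multi-task hypothesis class and its local restriction, which is the quantity the abstract's bounds are phrased in. It is a minimal sketch following the general local-Rademacher-complexity literature, not the paper's exact statements; the symbols T (tasks), n (samples per task), r (localization radius), and r* (fixed point) are illustrative assumptions.

```latex
% Minimal sketch of the standard definitions behind the abstract's terminology.
% Notation (T tasks, n samples per task, radius r, fixed point r^*) is assumed
% for illustration and need not match the paper's exact statements.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Empirical Rademacher complexity of a multi-task class
$\mathcal{F} \ni f = (f_1, \dots, f_T)$, with $n$ samples per task and
i.i.d.\ signs $\sigma_{ti} \in \{-1,+1\}$:
\[
  \widehat{\mathfrak{R}}(\mathcal{F})
  = \mathbb{E}_{\sigma} \sup_{f \in \mathcal{F}}
    \frac{1}{Tn} \sum_{t=1}^{T} \sum_{i=1}^{n} \sigma_{ti}\, f_t(X_{ti}).
\]

The \emph{local} Rademacher complexity restricts the supremum to hypotheses
of small variance (or norm), e.g.
\[
  \widehat{\mathfrak{R}}\Bigl(\bigl\{ f \in \mathcal{F} :
    \tfrac{1}{T}\textstyle\sum_{t=1}^{T} \operatorname{Var}[f_t] \le r \bigr\}\Bigr),
\]
and fast-rate excess-risk bounds of the kind described in the abstract are
typically stated via the fixed point $r^{*}$ of a sub-root function
upper-bounding this localized quantity.

\end{document}
```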
Persistent Identifier: http://hdl.handle.net/10722/329522
ISSN: 1532-4435
2023 Impact Factor: 4.3
2023 SCImago Journal Rankings: 2.796

 

DC Field: Value
dc.contributor.author: Yousefi, Niloofar
dc.contributor.author: Lei, Yunwen
dc.contributor.author: Kloft, Marius
dc.contributor.author: Mollaghasemi, Mansooreh
dc.contributor.author: Anagnostopoulos, Georgios C.
dc.date.accessioned: 2023-08-09T03:33:24Z
dc.date.available: 2023-08-09T03:33:24Z
dc.date.issued: 2018
dc.identifier.citation: Journal of Machine Learning Research, 2018, v. 19
dc.identifier.issn: 1532-4435
dc.identifier.uri: http://hdl.handle.net/10722/329522
dc.description.abstract: We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we establish sharp excess risk bounds for MTL in terms of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for any norm regularized hypothesis classes, which applies not only to MTL, but also to the standard Single-Task Learning (STL) setting. By combining both results, one can easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten norm, group norm, and graph regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates. When compared to the rates obtained via a traditional, global Rademacher analysis, this very relationship allows for trading off slower rates with respect to the number of tasks for faster rates with respect to the number of available samples per task.
dc.language: eng
dc.relation.ispartof: Journal of Machine Learning Research
dc.subject: Excess Risk Bounds
dc.subject: Local Rademacher Complexity
dc.subject: Multi-task Learning
dc.title: Local Rademacher complexity-based learning guarantees for multi-task learning
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85053376688
dc.identifier.volume: 19
dc.identifier.eissn: 1533-7928
