Conference Paper: MTFormer: Multi-task Learning via Transformer and Cross-Task Reasoning

Title: MTFormer: Multi-task Learning via Transformer and Cross-Task Reasoning
Authors: Xu, Xiaogang; Zhao, Hengshuang; Vineet, Vibhav; Lim, Ser Nam; Torralba, Antonio
Keywords: Cross-task reasoning; Multi-task learning; Transformer
Issue Date: 2022
Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, v. 13687 LNCS, p. 304-321
Abstract: In this paper, we explore the advantages of using transformer structures for multi-task learning (MTL). Specifically, we show that models with transformer structures are better suited to MTL than convolutional neural networks (CNNs), and we propose a novel transformer-based architecture named MTFormer for MTL. In this framework, multiple tasks share the same transformer encoder and transformer decoder, and lightweight branches are introduced to produce task-specific outputs, which improves MTL performance and reduces time and space complexity. Furthermore, because information from different task domains can benefit each other, we perform cross-task reasoning and propose a cross-task attention mechanism that further boosts the MTL results. The cross-task attention mechanism adds few parameters and little computation while yielding additional performance gains. In addition, we design a self-supervised cross-task contrastive learning algorithm that further improves MTL performance. Extensive experiments on two multi-task learning datasets show that MTFormer achieves state-of-the-art results with limited network parameters and computation. It also shows significant advantages in few-shot and zero-shot learning.
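
The abstract describes the architecture only in prose. Below is a minimal PyTorch sketch of one plausible reading of that description (it is not the authors' implementation): a shared transformer encoder and decoder, lightweight task-specific branches and heads, a cross-task attention step in which each task's features attend to the other tasks' features, and an InfoNCE-style cross-task contrastive loss. All module names, layer counts, feature sizes, and the two example tasks are illustrative assumptions.

import torch
import torch.nn as nn


class CrossTaskAttention(nn.Module):
    """One task's tokens (queries) attend to another task's tokens (keys/values)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_q: torch.Tensor, x_kv: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x_q, x_kv, x_kv)
        return self.norm(x_q + out)  # residual connection around the attention


class SharedMultiTaskTransformer(nn.Module):
    """Shared encoder/decoder, lightweight per-task branches, cross-task attention."""

    def __init__(self, dim: int = 256, tasks=("segmentation", "depth"), out_dims=(21, 1)):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.shared_decoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for the shared decoder
        self.tasks = list(tasks)
        # Lightweight task-specific branches and output heads.
        self.branches = nn.ModuleDict(
            {t: nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for t in tasks}
        )
        self.cross_attn = nn.ModuleDict({t: CrossTaskAttention(dim) for t in tasks})
        self.heads = nn.ModuleDict({t: nn.Linear(dim, d) for t, d in zip(tasks, out_dims)})

    def forward(self, tokens: torch.Tensor) -> dict:
        # tokens: (batch, num_patches, dim) patch embeddings shared by all tasks.
        shared = self.shared_decoder(self.shared_encoder(tokens))
        feats = {t: self.branches[t](shared) for t in self.tasks}
        # Cross-task attention: each task's features attend to the other tasks' features.
        refined = {}
        for t in self.tasks:
            others = torch.cat([feats[o] for o in self.tasks if o != t], dim=1)
            refined[t] = self.cross_attn[t](feats[t], others)
        return {t: self.heads[t](refined[t]) for t in self.tasks}


def cross_task_contrastive_loss(feat_a: torch.Tensor, feat_b: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE-style loss treating pooled features of the same image under
    two different tasks as a positive pair; only a guess at what the paper's
    self-supervised cross-task contrastive learning looks like."""
    za = nn.functional.normalize(feat_a.mean(dim=1), dim=-1)  # (batch, dim)
    zb = nn.functional.normalize(feat_b.mean(dim=1), dim=-1)
    logits = za @ zb.t() / temperature                        # (batch, batch)
    targets = torch.arange(za.size(0), device=za.device)
    return nn.functional.cross_entropy(logits, targets)


if __name__ == "__main__":
    model = SharedMultiTaskTransformer()
    tokens = torch.randn(2, 196, 256)  # e.g. 14x14 patch tokens of dimension 256
    outputs = model(tokens)
    for task, out in outputs.items():
        print(task, tuple(out.shape))  # each head returns per-token task outputs
    fa, fb = torch.randn(2, 196, 256), torch.randn(2, 196, 256)
    print(float(cross_task_contrastive_loss(fa, fb)))

The sketch keeps the key property highlighted in the abstract: the encoder/decoder parameters are shared across tasks, while only the small branches, heads, and cross-task attention modules are task-specific.
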
Persistent Identifier: http://hdl.handle.net/10722/333567
ISSN: 0302-9743
2023 SCImago Journal Rankings: 0.606
ISI Accession Number ID: WOS:000903590200018

 

DC Field: Value
dc.contributor.author: Xu, Xiaogang
dc.contributor.author: Zhao, Hengshuang
dc.contributor.author: Vineet, Vibhav
dc.contributor.author: Lim, Ser Nam
dc.contributor.author: Torralba, Antonio
dc.date.accessioned: 2023-10-06T05:20:38Z
dc.date.available: 2023-10-06T05:20:38Z
dc.date.issued: 2022
dc.identifier.citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, v. 13687 LNCS, p. 304-321
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/333567
dc.description.abstract: In this paper, we explore the advantages of using transformer structures for multi-task learning (MTL). Specifically, we show that models with transformer structures are better suited to MTL than convolutional neural networks (CNNs), and we propose a novel transformer-based architecture named MTFormer for MTL. In this framework, multiple tasks share the same transformer encoder and transformer decoder, and lightweight branches are introduced to produce task-specific outputs, which improves MTL performance and reduces time and space complexity. Furthermore, because information from different task domains can benefit each other, we perform cross-task reasoning and propose a cross-task attention mechanism that further boosts the MTL results. The cross-task attention mechanism adds few parameters and little computation while yielding additional performance gains. In addition, we design a self-supervised cross-task contrastive learning algorithm that further improves MTL performance. Extensive experiments on two multi-task learning datasets show that MTFormer achieves state-of-the-art results with limited network parameters and computation. It also shows significant advantages in few-shot and zero-shot learning.
dc.language: eng
dc.relation.ispartof: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.subject: Cross-task reasoning
dc.subject: Multi-task learning
dc.subject: Transformer
dc.title: MTFormer: Multi-task Learning via Transformer and Cross-Task Reasoning
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1007/978-3-031-19812-0_18
dc.identifier.scopus: eid_2-s2.0-85142734874
dc.identifier.volume: 13687 LNCS
dc.identifier.spage: 304
dc.identifier.epage: 321
dc.identifier.eissn: 1611-3349
dc.identifier.isi: WOS:000903590200018
