Conference Paper: MTFormer: Multi-task Learning via Transformer and Cross-Task Reasoning

Title: MTFormer: Multi-task Learning via Transformer and Cross-Task Reasoning
Authors: Xu, XG; Zhao, HS; Vineet, V; Lim, SN; Torralba, A
Keywords: Cross-task reasoning; Multi-task learning; Transformer
Issue Date: 1-Jan-2022
Publisher: Springer
Abstract: In this paper, we explore the advantages of using transformer structures for multi-task learning (MTL). Specifically, we demonstrate that models with transformer structures are better suited to MTL than convolutional neural networks (CNNs), and we propose a novel transformer-based architecture named MTFormer for MTL. In this framework, multiple tasks share the same transformer encoder and transformer decoder, and lightweight branches are introduced to harvest task-specific outputs, which improves MTL performance and reduces time and space complexity. Furthermore, since information from different task domains can benefit each other, we conduct cross-task reasoning and propose a cross-task attention mechanism to further boost the MTL results. The cross-task attention mechanism adds few parameters and little computation while yielding additional performance improvements. In addition, we design a self-supervised cross-task contrastive learning algorithm to further improve MTL performance. Extensive experiments are conducted on two multi-task learning datasets, on which MTFormer achieves state-of-the-art results with limited network parameters and computation. It also demonstrates significant superiority in few-shot and zero-shot learning.
Persistent Identifier: http://hdl.handle.net/10722/333855
ISSN: 0302-9743 (2023 SCImago Journal Rankings: 0.606)
ISI Accession Number ID: WOS:000903590200018
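
The abstract sketches three components: a shared transformer encoder and decoder, lightweight task-specific branches, and a cross-task attention mechanism that fuses features across tasks. This record does not include the paper's implementation, so the following PyTorch sketch only illustrates that general design under stated assumptions; every module size, the round-robin task pairing, and all names (`MTFormerSketch`, `CrossTaskAttention`, the two hypothetical dense tasks) are illustrative, not the authors' code.

```python
# Hedged sketch of the design the abstract describes; all sizes and names
# are assumptions for illustration, not the paper's configuration.
import torch
import torch.nn as nn


class CrossTaskAttention(nn.Module):
    """Tokens of one task (queries) attend to tokens of another task (keys/values)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_task: torch.Tensor, x_other: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=x_task, key=x_other, value=x_other)
        # Residual + norm keeps the fusion step cheap relative to the backbone.
        return self.norm(x_task + fused)


class MTFormerSketch(nn.Module):
    """Shared encoder/decoder, lightweight per-task branches, cross-task fusion."""

    def __init__(self, dim: int = 256, task_out_dims: tuple = (1, 13)):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.shared_encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # A second shared stack stands in for the shared transformer decoder.
        dec_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.shared_decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        # "Lightweight branches": one small projection per task makes the
        # features task-specific before the output heads.
        self.branches = nn.ModuleList(nn.Linear(dim, dim) for _ in task_out_dims)
        self.cross = CrossTaskAttention(dim)
        self.heads = nn.ModuleList(nn.Linear(dim, d) for d in task_out_dims)

    def forward(self, tokens: torch.Tensor) -> list:
        # tokens: (batch, num_tokens, dim), e.g. flattened image patches.
        shared = self.shared_decoder(self.shared_encoder(tokens))
        feats = [branch(shared) for branch in self.branches]
        # Each task attends to the next task's features (round-robin pairing).
        fused = [self.cross(f, feats[(i + 1) % len(feats)])
                 for i, f in enumerate(feats)]
        return [head(f) for head, f in zip(self.heads, fused)]


# Usage: a batch of 2 inputs tokenized into 196 tokens of width 256,
# with two hypothetical dense tasks (1-channel and 13-channel outputs).
model = MTFormerSketch()
outputs = model(torch.randn(2, 196, 256))
print([tuple(o.shape) for o in outputs])  # [(2, 196, 1), (2, 196, 13)]
```

The residual-plus-norm fusion step mirrors the abstract's claim that cross-task attention adds few parameters and little computation: it reuses a single multi-head attention module on top of features that are otherwise shared.
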
DC Field: Value

dc.contributor.author: Xu, XG
dc.contributor.author: Zhao, HS
dc.contributor.author: Vineet, V
dc.contributor.author: Lim, SN
dc.contributor.author: Torralba, A
dc.date.accessioned: 2023-10-06T08:39:38Z
dc.date.available: 2023-10-06T08:39:38Z
dc.date.issued: 2022-01-01
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/333855
dc.description.abstract: In this paper, we explore the advantages of using transformer structures for multi-task learning (MTL). Specifically, we demonstrate that models with transformer structures are better suited to MTL than convolutional neural networks (CNNs), and we propose a novel transformer-based architecture named MTFormer for MTL. In this framework, multiple tasks share the same transformer encoder and transformer decoder, and lightweight branches are introduced to harvest task-specific outputs, which improves MTL performance and reduces time and space complexity. Furthermore, since information from different task domains can benefit each other, we conduct cross-task reasoning and propose a cross-task attention mechanism to further boost the MTL results. The cross-task attention mechanism adds few parameters and little computation while yielding additional performance improvements. In addition, we design a self-supervised cross-task contrastive learning algorithm to further improve MTL performance. Extensive experiments are conducted on two multi-task learning datasets, on which MTFormer achieves state-of-the-art results with limited network parameters and computation. It also demonstrates significant superiority in few-shot and zero-shot learning.
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: 17th European Conference on Computer Vision (ECCV) (23/10/2022, Tel Aviv)
dc.subject: Cross-task reasoning
dc.subject: Multi-task learning
dc.subject: Transformer
dc.title: MTFormer: Multi-task Learning via Transformer and Cross-Task Reasoning
dc.type: Conference_Paper
dc.identifier.doi: 10.1007/978-3-031-19812-0_18
dc.identifier.scopus: eid_2-s2.0-85142734874
dc.identifier.volume: 13687
dc.identifier.spage: 304
dc.identifier.epage: 321
dc.identifier.eissn: 1611-3349
dc.identifier.isi: WOS:000903590200018
dc.publisher.place: Cham
dc.identifier.issnl: 0302-9743
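
The abstract also mentions a self-supervised cross-task contrastive learning algorithm but gives no formulation. One plausible reading is a symmetric InfoNCE loss in which features of the same token under two different tasks form a positive pair and all other tokens act as negatives; the sketch below implements only that assumption, not the paper's actual loss.

```python
# Hypothetical InfoNCE-style cross-task contrastive loss; an assumption
# for illustration, since the record gives no details of the actual loss.
import torch
import torch.nn.functional as F


def cross_task_contrastive_loss(feat_a: torch.Tensor,
                                feat_b: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over (N, D) features of the same N tokens under two tasks."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    logits = a @ b.t() / temperature              # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric loss: token i of task A should match token i of task B,
    # and vice versa; every other token serves as a negative.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


# Usage with random stand-in features for 196 tokens of width 256.
loss = cross_task_contrastive_loss(torch.randn(196, 256), torch.randn(196, 256))
print(loss.item())
```

In practice the two feature sets would come from the task-specific branches of the model rather than from random tensors.
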
