Article: Patch-Based Separable Transformer for Visual Recognition

Title: Patch-Based Separable Transformer for Visual Recognition
Authors: Sun, SY; Yue, XY; Zhao, HS; Torr, PHS; Bai, S
Keywords: image classification; instance segmentation; object detection; Transformer
Issue Date: 1-Jul-2023
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 7, p. 9241-9247
Abstract: The computational complexity of transformers limits their wide deployment in visual recognition frameworks. Recent work (Dosovitskiy et al. 2021) significantly accelerates network processing by reducing the resolution at the beginning of the network; however, it remains hard to generalize directly to other downstream tasks, such as object detection and segmentation, as CNNs do. In this paper, we present a transformer-based architecture that retains both local and global interactions within the network and is transferable to other downstream tasks. The proposed architecture reforms the original full spatial self-attention into pixel-wise local attention and patch-wise global attention. Such factorization saves computational cost while retaining information at different granularities, which helps generate the multi-scale features required by different tasks. By exploiting the factorized attention, we construct a Separable Transformer (SeT) for visual modeling. Experimental results show that SeT outperforms previous state-of-the-art transformer-based approaches and its CNN counterparts on three major tasks: image classification, object detection, and instance segmentation.
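The factorization the abstract describes — pixel-wise local attention within each patch plus patch-wise global attention across patch summaries — can be illustrated with a toy NumPy sketch. This is not the authors' implementation; the patch partitioning, mean-pooled patch summaries, and the additive fusion of local and global context are illustrative assumptions, kept only to show why the split is cheaper than full spatial self-attention (O(HW·p²) + O((HW/p²)²) instead of O((HW)²) for patch side p).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention over the last two axes.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def separable_attention(x, patch):
    # x: (H, W, C) feature map; patch: side length of square patches.
    # Hypothetical single-head sketch of the local/global factorization.
    H, W, C = x.shape
    ph, pw = H // patch, W // patch
    # Split into non-overlapping patches: (num_patches, patch*patch, C).
    t = x.reshape(ph, patch, pw, patch, C).transpose(0, 2, 1, 3, 4)
    tokens = t.reshape(ph * pw, patch * patch, C)
    # 1) Pixel-wise local attention: pixels attend within their own patch.
    local = attention(tokens, tokens, tokens)
    # 2) Patch-wise global attention over per-patch summaries
    #    (mean pooling as an assumed, simple summary).
    summaries = local.mean(axis=1)                       # (num_patches, C)
    glob = attention(summaries[None], summaries[None], summaries[None])[0]
    # Broadcast each patch's global context back to its pixels.
    out = local + glob[:, None, :]
    # Restore the (H, W, C) spatial layout.
    out = out.reshape(ph, pw, patch, patch, C).transpose(0, 2, 1, 3, 4)
    return out.reshape(H, W, C)
```

Because attention cost is quadratic in sequence length, attending over patch*patch pixels locally and HW/p² summaries globally is far cheaper than one attention over all HW pixels, while the two paths together still expose both granularities of information.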
Persistent Identifier: http://hdl.handle.net/10722/331713
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158
ISI Accession Number ID: WOS:001004665900085

 

DC Field | Value | Language
dc.contributor.author | Sun, SY | -
dc.contributor.author | Yue, XY | -
dc.contributor.author | Zhao, HS | -
dc.contributor.author | Torr, PHS | -
dc.contributor.author | Bai, S | -
dc.date.accessioned | 2023-09-21T06:58:14Z | -
dc.date.available | 2023-09-21T06:58:14Z | -
dc.date.issued | 2023-07-01 | -
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 7, p. 9241-9247 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10722/331713 | -
dc.description.abstract | The computational complexity of transformers limits their wide deployment in visual recognition frameworks. Recent work (Dosovitskiy et al. 2021) significantly accelerates network processing by reducing the resolution at the beginning of the network; however, it remains hard to generalize directly to other downstream tasks, such as object detection and segmentation, as CNNs do. In this paper, we present a transformer-based architecture that retains both local and global interactions within the network and is transferable to other downstream tasks. The proposed architecture reforms the original full spatial self-attention into pixel-wise local attention and patch-wise global attention. Such factorization saves computational cost while retaining information at different granularities, which helps generate the multi-scale features required by different tasks. By exploiting the factorized attention, we construct a Separable Transformer (SeT) for visual modeling. Experimental results show that SeT outperforms previous state-of-the-art transformer-based approaches and its CNN counterparts on three major tasks: image classification, object detection, and instance segmentation. | -
dc.language | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers | -
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | image classification | -
dc.subject | instance segmentation | -
dc.subject | object detection | -
dc.subject | Transformer | -
dc.title | Patch-Based Separable Transformer for Visual Recognition | -
dc.type | Article | -
dc.identifier.doi | 10.1109/TPAMI.2022.3231725 | -
dc.identifier.pmid | 37015401 | -
dc.identifier.scopus | eid_2-s2.0-85146254979 | -
dc.identifier.volume | 45 | -
dc.identifier.issue | 7 | -
dc.identifier.spage | 9241 | -
dc.identifier.epage | 9247 | -
dc.identifier.eissn | 1939-3539 | -
dc.identifier.isi | WOS:001004665900085 | -
dc.publisher.place | LOS ALAMITOS | -
dc.identifier.issnl | 0162-8828 | -
