Conference Paper: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions

Title: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions
Authors: Wang, W; Xie, E; Li, X; Fan, DP; Song, K; Liang, D; Lu, T; Luo, P; Shao, L
Issue Date: 2021
Publisher: IEEE Computer Society
Citation: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, October 10-17, 2021. In Proceedings: 2021 IEEE/CVF International Conference on Computer Vision: ICCV 2021, 11-17 October 2021, Virtual event, p. 548-558
Abstract: Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to the current state of the art. (1) Different from ViT, which typically yields low-resolution outputs and incurs high computational and memory costs, PVT can not only be trained on dense partitions of an image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computations of large feature maps. (2) PVT inherits the advantages of both CNN and Transformer, making it a unified backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection and instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT can serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
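To make the abstract's "reduce the computations of large feature maps" claim concrete: PVT pairs its shrinking pyramid with spatial-reduction attention (SRA), in which keys and values are downsampled before attention so the quadratic cost is paid on a much smaller token grid. The PyTorch sketch below is an illustrative reconstruction of that idea, not the authors' released code; the class name, head count, and reduction ratio are assumptions chosen for the example.

```python
# Illustrative sketch of PVT-style spatial-reduction attention (SRA).
# Keys/values are shrunk by sr_ratio per side via a strided convolution,
# so attention over an H*W token grid costs O(N * N / sr_ratio^2)
# instead of O(N^2), N = H * W.
import torch
import torch.nn as nn

class SpatialReductionAttention(nn.Module):
    def __init__(self, dim, num_heads=2, sr_ratio=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        # Strided conv shrinks the key/value token grid by sr_ratio per side.
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):
        # x: (B, N, C) with N = H * W tokens from one pyramid stage.
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        # Reduce the spatial size of keys/values before attention.
        x_ = x.transpose(1, 2).reshape(B, C, H, W)
        x_ = self.sr(x_).reshape(B, C, -1).transpose(1, 2)
        x_ = self.norm(x_)
        kv = (self.kv(x_)
              .reshape(B, -1, 2, self.num_heads, self.head_dim)
              .permute(2, 0, 3, 1, 4))
        k, v = kv[0], kv[1]

        attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Example: a 56x56 token grid (a high-resolution pyramid stage), 64 channels.
if __name__ == "__main__":
    x = torch.randn(2, 56 * 56, 64)
    sra = SpatialReductionAttention(dim=64, num_heads=2, sr_ratio=4)
    print(sra(x, 56, 56).shape)  # torch.Size([2, 3136, 64])
```

With sr_ratio=4, the attention matrix for the 56x56 stage shrinks from 3136x3136 to 3136x196, which is what makes keeping high-resolution outputs affordable for dense prediction.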
Persistent Identifier: http://hdl.handle.net/10722/315684
ISI Accession Number ID: WOS:000797698900055

 

DC Field: Value
dc.contributor.author: Wang, W
dc.contributor.author: Xie, E
dc.contributor.author: Li, X
dc.contributor.author: Fan, DP
dc.contributor.author: Song, K
dc.contributor.author: Liang, D
dc.contributor.author: Lu, T
dc.contributor.author: Luo, P
dc.contributor.author: Shao, L
dc.date.accessioned: 2022-08-19T09:02:30Z
dc.date.available: 2022-08-19T09:02:30Z
dc.date.issued: 2021
dc.identifier.citation: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, October 10-17, 2021. In Proceedings: 2021 IEEE/CVF International Conference on Computer Vision: ICCV 2021, 11-17 October 2021, Virtual event, p. 548-558
dc.identifier.uri: http://hdl.handle.net/10722/315684
dc.description.abstract: Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to the current state of the art. (1) Different from ViT, which typically yields low-resolution outputs and incurs high computational and memory costs, PVT can not only be trained on dense partitions of an image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computations of large feature maps. (2) PVT inherits the advantages of both CNN and Transformer, making it a unified backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection and instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinaNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT can serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
dc.language: eng
dc.publisher: IEEE Computer Society
dc.relation.ispartof: Proceedings: 2021 IEEE/CVF International Conference on Computer Vision: ICCV 2021, 11-17 October 2021, Virtual event
dc.rights: Proceedings: 2021 IEEE/CVF International Conference on Computer Vision: ICCV 2021, 11-17 October 2021, Virtual event. Copyright © IEEE Computer Society.
dc.title: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions
dc.type: Conference_Paper
dc.identifier.email: Luo, P: pluo@hku.hk
dc.identifier.authority: Luo, P=rp02575
dc.identifier.doi: 10.1109/ICCV48922.2021.00061
dc.identifier.hkuros: 335601
dc.identifier.spage: 548
dc.identifier.epage: 558
dc.identifier.isi: WOS:000797698900055
dc.publisher.place: United States
