
Conference Paper: SegFormer: Simple and efficient design for semantic segmentation with transformers

Title: SegFormer: Simple and efficient design for semantic segmentation with transformers
Authors: Xie, E; Wang, W; Yu, Z; Anandkumar, A; Alvarez, JM; Luo, P
Keywords: Semantic Segmentation; Transformers
Issue Date: 2021
Publisher: Neural Information Processing Systems Foundation
Citation: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 12077-12090
Abstract: We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes, which leads to decreased performance when the testing resolution differs from the training resolution. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combines both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation with Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, which reach much better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.
Description: Poster Session 1
Persistent Identifier: http://hdl.handle.net/10722/315555
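As a quick illustration of the all-MLP decoder idea described in the abstract, here is a minimal, hypothetical PyTorch sketch: each encoder stage's feature map is linearly projected to a shared width, upsampled to a common resolution, concatenated, and fused by another linear layer before per-pixel classification. The class name, channel widths, and embedding size are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of an all-MLP decoder over multi-scale encoder features.
# Channel widths, names, and sizes are illustrative assumptions, not the
# official SegFormer code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPDecoderSketch(nn.Module):
    def __init__(self, in_channels=(32, 64, 160, 256), embed_dim=256, num_classes=150):
        super().__init__()
        # One linear projection per encoder stage, unifying channel width.
        self.proj = nn.ModuleList(nn.Linear(c, embed_dim) for c in in_channels)
        # Fuse the concatenated multi-scale features, then predict classes.
        self.fuse = nn.Linear(embed_dim * len(in_channels), embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, features):
        # features: list of tensors (B, C_i, H_i, W_i) from the encoder hierarchy,
        # ordered from the highest-resolution stage to the lowest.
        target_hw = features[0].shape[2:]  # upsample everything to the finest map
        outs = []
        for f, proj in zip(features, self.proj):
            b, c, h, w = f.shape
            f = proj(f.flatten(2).transpose(1, 2))      # (B, H*W, embed_dim)
            f = f.transpose(1, 2).reshape(b, -1, h, w)  # back to (B, embed_dim, H, W)
            f = F.interpolate(f, size=target_hw, mode="bilinear", align_corners=False)
            outs.append(f)
        x = torch.cat(outs, dim=1)                      # (B, 4*embed_dim, H, W)
        x = self.fuse(x.flatten(2).transpose(1, 2))     # MLP fusion across stages
        logits = self.classifier(x)                     # per-pixel class scores
        b, n, k = logits.shape
        return logits.transpose(1, 2).reshape(b, k, *target_hw)

# Example usage with dummy features at 1/4, 1/8, 1/16, and 1/32 scale:
feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i)
         for i, c in enumerate((32, 64, 160, 256))]
print(MLPDecoderSketch()(feats).shape)  # torch.Size([1, 150, 64, 64])
```

Because the fusion is purely per-pixel MLPs over features that already carry attention at multiple scales, the decoder stays lightweight while still mixing local and global context, which is the design point the abstract emphasizes.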

 

DC Field | Value | Language
dc.contributor.author | Xie, E | -
dc.contributor.author | Wang, W | -
dc.contributor.author | Yu, Z | -
dc.contributor.author | Anandkumar, A | -
dc.contributor.author | Alvarez, JM | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2022-08-19T09:00:04Z | -
dc.date.available | 2022-08-19T09:00:04Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | 35th Conference on Neural Information Processing Systems (NeurIPS 2021) (Virtual), December 6-14, 2021. In Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021), p. 12077-12090 | -
dc.identifier.uri | http://hdl.handle.net/10722/315555 | -
dc.description | Poster Session 1 | -
dc.description.abstract | We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes, which leads to decreased performance when the testing resolution differs from the training resolution. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combines both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation with Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, which reach much better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. | -
dc.language | eng | -
dc.publisher | Neural Information Processing Systems Foundation | -
dc.relation.ispartof | Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021) | -
dc.subject | Semantic Segmentation | -
dc.subject | Transformers | -
dc.title | SegFormer: Simple and efficient design for semantic segmentation with transformers | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.hkuros | 335603 | -
dc.identifier.spage | 12077 | -
dc.identifier.epage | 12090 | -
dc.publisher.place | United States | -
