Conference Paper: SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness

Title: SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Authors: Gu, JD; Zhao, HS; Tresp, V; Torr, PHS
Keywords: Adversarial robustness; Semantic segmentation
Issue Date: 23-Oct-2022
Publisher: Springer
Abstract: Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be fooled by adding small, imperceptible artificial perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed to address this vulnerability; adversarial examples are created and injected into the training data during training. The attack and defense of classification models have been studied intensively in recent years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work shows that a large number of attack iterations is required to create adversarial examples effective enough to fool segmentation models. This observation makes both robustness evaluation and adversarial training of segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD. We also provide a convergence analysis showing that SegPGD creates more effective adversarial examples than PGD under the same number of attack iterations. Furthermore, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD boosts the robustness of segmentation models. Our proposals are verified with experiments on popular segmentation model architectures and standard segmentation datasets.
Persistent Identifier: http://hdl.handle.net/10722/333856
ISSN: 0302-9743
2020 SCImago Journal Rankings: 0.249
ISI Accession Number ID: WOS:000903735000018
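
The abstract positions SegPGD relative to the standard PGD attack. For orientation, the sketch below shows a baseline L-infinity PGD attack applied to a semantic segmentation model in PyTorch. This is plain PGD with a uniform per-pixel cross-entropy loss, not the paper's SegPGD loss, and the model, tensor shapes, and hyperparameters (eps, alpha, iters) are illustrative assumptions rather than values from the paper.

    # Baseline L_inf PGD for a segmentation model (PyTorch sketch).
    # NOTE: standard PGD with uniform per-pixel cross-entropy, NOT the
    # paper's SegPGD loss; model, shapes, and hyperparameters are
    # illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def pgd_attack_segmentation(model, images, labels,
                                eps=8 / 255, alpha=2 / 255, iters=20):
        """Ascend the mean per-pixel cross-entropy within an eps-ball."""
        images = images.clone().detach()
        # Random start inside the eps-ball, as in standard PGD.
        adv = images + torch.empty_like(images).uniform_(-eps, eps)
        adv = adv.clamp(0.0, 1.0).detach()
        for _ in range(iters):
            adv.requires_grad_(True)
            logits = model(adv)                     # (N, C, H, W) scores
            loss = F.cross_entropy(logits, labels)  # labels: (N, H, W) long
            grad = torch.autograd.grad(loss, adv)[0]
            # Gradient-sign ascent step, then project back into the
            # eps-ball around the clean images and the valid pixel range.
            adv = adv.detach() + alpha * grad.sign()
            adv = torch.min(torch.max(adv, images - eps), images + eps)
            adv = adv.clamp(0.0, 1.0)
        return adv.detach()

Per the abstract, SegPGD modifies the per-iteration loss so that each attack step is spent more effectively than the uniform cross-entropy above; the exact formulation is given in the paper (DOI: 10.1007/978-3-031-19818-2_18).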

 

Dublin Core Record
dc.contributor.author: Gu, JD
dc.contributor.author: Zhao, HS
dc.contributor.author: Tresp, V
dc.contributor.author: Torr, PHS
dc.date.accessioned: 2023-10-06T08:39:38Z
dc.date.available: 2023-10-06T08:39:38Z
dc.date.issued: 2022-10-23
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://hdl.handle.net/10722/333856
dc.description.abstract: Deep neural network-based image classifiers are vulnerable to adversarial perturbations: they can be fooled by adding small, imperceptible artificial perturbations to input images. As one of the most effective defense strategies, adversarial training was proposed to address this vulnerability; adversarial examples are created and injected into the training data during training. The attack and defense of classification models have been studied intensively in recent years. Semantic segmentation, as an extension of classification, has also received great attention recently. Recent work shows that a large number of attack iterations is required to create adversarial examples effective enough to fool segmentation models. This observation makes both robustness evaluation and adversarial training of segmentation models challenging. In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD. We also provide a convergence analysis showing that SegPGD creates more effective adversarial examples than PGD under the same number of attack iterations. Furthermore, we propose to apply SegPGD as the underlying attack method for segmentation adversarial training. Since SegPGD creates more effective adversarial examples, adversarial training with SegPGD boosts the robustness of segmentation models. Our proposals are verified with experiments on popular segmentation model architectures and standard segmentation datasets.
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: 17th European Conference on Computer Vision (ECCV) (23/10/2022, Tel Aviv)
dc.subject: Adversarial robustness
dc.subject: Semantic segmentation
dc.title: SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
dc.type: Conference_Paper
dc.identifier.doi: 10.1007/978-3-031-19818-2_18
dc.identifier.scopus: eid_2-s2.0-85142768149
dc.identifier.volume: 13689
dc.identifier.spage: 308
dc.identifier.epage: 325
dc.identifier.eissn: 1611-3349
dc.identifier.isi: WOS:000903735000018
dc.publisher.place: CHAM
dc.identifier.issnl: 0302-9743
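
The abstract also proposes SegPGD as the inner attack for segmentation adversarial training. The sketch below shows the generic structure of such a training epoch, reusing the hypothetical pgd_attack_segmentation helper from the earlier sketch; the loader, optimizer, and the eval/train toggling around the attack are common-practice assumptions, not details taken from the paper.

    # Generic adversarial-training epoch for a segmentation model.
    # The paper plugs SegPGD in as attack_fn; any attack matching the
    # signature of the earlier pgd_attack_segmentation sketch works.
    import torch.nn.functional as F

    def adversarial_train_epoch(model, loader, optimizer, attack_fn):
        model.train()
        for images, labels in loader:
            # Craft adversarial examples against the current weights.
            model.eval()   # freeze batch-norm statistics during the attack
            adv = attack_fn(model, images, labels)
            model.train()
            optimizer.zero_grad()
            # Train on the adversarial examples (standard per-pixel loss).
            loss = F.cross_entropy(model(adv), labels)
            loss.backward()
            optimizer.step()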
