Conference Paper: Improving Adversarial Transferability via Intermediate-level Perturbation Decay
Field | Value
---|---
Title | Improving Adversarial Transferability via Intermediate-level Perturbation Decay
Authors | Li, Qizhang; Guo, Yiwen; Zuo, Wangmeng; Chen, Hao
Issue Date | 2023
Citation | Advances in Neural Information Processing Systems, 2023, v. 36
Abstract | Intermediate-level attacks that attempt to perturb feature representations drastically along an adversarial direction have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated in two separate stages: a directional guide is determined first, and the scalar projection of the intermediate-level perturbation onto that guide is enlarged thereafter. The obtained perturbation inevitably deviates from the guide in the feature space, and this paper reveals that such a deviation may lead to sub-optimal attacks. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and to possess a large magnitude simultaneously. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-art methods by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Our code is at https://github.com/qizhangli/ILPD-attack.
Persistent Identifier | http://hdl.handle.net/10722/347088
ISSN | 1049-5258 (2020 SCImago Journal Rankings: 1.399)
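The abstract describes a single-stage attack in which the perturbation at an intermediate layer is decayed during optimization. As a rough illustration of how such feature-level decay can be wired into a standard iterative attack, the PyTorch sketch below shrinks the intermediate perturbation toward the clean feature on each forward pass while an I-FGSM loop maximizes the classification loss. This is a minimal sketch of the general idea, not the authors' reference implementation (see the linked repository for that); the layer choice, the decay factor `gamma`, and all attack hyper-parameters here are illustrative assumptions.

```python
# Illustrative sketch of an intermediate-level perturbation-decay attack.
# NOT the authors' reference implementation; hyper-parameters are assumptions.
import torch
import torch.nn.functional as F

def ilpd_style_attack(model, layer, x, y,
                      eps=8 / 255, alpha=2 / 255, steps=100, gamma=0.5):
    """Craft adversarial examples on a white-box source model.

    `layer` is any intermediate nn.Module inside `model`; `gamma` in (0, 1]
    decays the intermediate-level perturbation during the forward pass.
    """
    model.eval()
    x_adv = x.clone().detach()
    state = {"clean": None, "decay": False}

    def decay_hook(_module, _inp, out):
        if state["decay"] and state["clean"] is not None:
            # Decay the intermediate-level perturbation toward the clean
            # feature: h <- h_clean + gamma * (h - h_clean).
            return state["clean"] + gamma * (out - state["clean"])
        return out

    handle = layer.register_forward_hook(decay_hook)
    try:
        # Cache the clean intermediate feature once (decay disabled).
        feats = {}
        grab = layer.register_forward_hook(
            lambda _m, _i, out: feats.__setitem__("h", out.detach()))
        with torch.no_grad():
            model(x)
        grab.remove()
        state["clean"] = feats["h"]

        state["decay"] = True
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()       # I-FGSM step
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # L_inf projection
                x_adv = x_adv.clamp(0, 1)                 # valid image range
    finally:
        handle.remove()
    return x_adv.detach()
```

Because the decayed feature is what the loss sees, the gradient step must enlarge the intermediate perturbation enough to survive the decay, which is one plausible reading of how a single optimization stage can favor both an effective adversarial direction and a large magnitude.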
DC Field | Value | Language |
---|---|---
dc.contributor.author | Li, Qizhang | - |
dc.contributor.author | Guo, Yiwen | - |
dc.contributor.author | Zuo, Wangmeng | - |
dc.contributor.author | Chen, Hao | - |
dc.date.accessioned | 2024-09-17T04:15:18Z | - |
dc.date.available | 2024-09-17T04:15:18Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Advances in Neural Information Processing Systems, 2023, v. 36 | - |
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | http://hdl.handle.net/10722/347088 | - |
dc.description.abstract | Intermediate-level attacks that attempt to perturb feature representations drastically along an adversarial direction have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated in two separate stages: a directional guide is determined first, and the scalar projection of the intermediate-level perturbation onto that guide is enlarged thereafter. The obtained perturbation inevitably deviates from the guide in the feature space, and this paper reveals that such a deviation may lead to sub-optimal attacks. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and to possess a large magnitude simultaneously. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-art methods by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Our code is at https://github.com/qizhangli/ILPD-attack. | -
dc.language | eng | - |
dc.relation.ispartof | Advances in Neural Information Processing Systems | - |
dc.title | Improving Adversarial Transferability via Intermediate-level Perturbation Decay | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85180066163 | - |
dc.identifier.volume | 36 | - |