File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1007/978-3-030-58517-4_15
- Scopus: eid_2-s2.0-85092906267
Citations:
- Scopus: 0
Appears in Collections:

Conference Paper: Yet Another Intermediate-Level Attack
Title | Yet Another Intermediate-Level Attack |
---|---|
Authors | Li, Qizhang; Guo, Yiwen; Chen, Hao |
Keywords | Adversarial examples; Feature maps; Transferability |
Issue Date | 2020 |
Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, v. 12361 LNCS, p. 241-257 |
Abstract | The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks. In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples. By establishing a linear mapping of the intermediate-level discrepancies (between a set of adversarial inputs and their benign counterparts) for predicting the evoked adversarial loss, we aim to take full advantage of the optimization procedure of multi-step baseline attacks. We conducted extensive experiments to verify the effectiveness of our method on CIFAR-100 and ImageNet. Experimental results demonstrate that it outperforms previous state-of-the-arts considerably. Our code is at https://github.com/qizhangli/ila-plus-plus. |
Persistent Identifier | http://hdl.handle.net/10722/346898 |
ISSN | 0302-9743 |
2023 SCImago Journal Rankings | 0.606 |
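
The abstract above describes the core step only at a high level: along a multi-step baseline attack, the method fits a linear mapping from intermediate-level feature discrepancies to the adversarial losses they evoke, then uses that mapping to guide a refined perturbation. The following is a minimal sketch of that step, assuming PyTorch; the names (`fit_direction`, `H`, `r`, `lam`) are illustrative and not taken from the authors' code, which is available at the linked repository.

```python
# Minimal sketch of the linear-mapping step, assuming a multi-step baseline
# attack (e.g., I-FGSM) has already been run and, at each step t, we recorded
# the flattened intermediate-level feature discrepancy h_t = f(x_adv_t) - f(x)
# and the adversarial loss r_t. Not the authors' implementation.
import torch

def fit_direction(H: torch.Tensor, r: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Ridge-regress the adversarial losses onto the feature discrepancies:
    w = (H^T H + lam * I)^{-1} H^T r, so that w @ h approximates the loss."""
    d = H.shape[1]
    A = H.T @ H + lam * torch.eye(d, dtype=H.dtype, device=H.device)
    return torch.linalg.solve(A, H.T @ r)

# Example with dummy data: T = 10 baseline steps, d = 512 feature dimensions.
T, d = 10, 512
H = torch.randn(T, d)   # discrepancies f(x_adv_t) - f(x), one row per step
r = torch.randn(T)      # corresponding adversarial losses
w = fit_direction(H, r) # feature-space direction that predicts the loss

# A refined attack would then maximize w @ (f(x') - f(x)) within the
# perturbation budget, pushing the intermediate-level discrepancy along w.
```

Under these assumptions, the regression replaces the single-direction guidance of earlier intermediate-level attacks with a direction distilled from the whole optimization trajectory of the baseline attack.
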
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Qizhang | - |
dc.contributor.author | Guo, Yiwen | - |
dc.contributor.author | Chen, Hao | - |
dc.date.accessioned | 2024-09-17T04:14:02Z | - |
dc.date.available | 2024-09-17T04:14:02Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020, v. 12361 LNCS, p. 241-257 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/346898 | - |
dc.description.abstract | The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks. In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples. By establishing a linear mapping of the intermediate-level discrepancies (between a set of adversarial inputs and their benign counterparts) for predicting the evoked adversarial loss, we aim to take full advantage of the optimization procedure of multi-step baseline attacks. We conducted extensive experiments to verify the effectiveness of our method on CIFAR-100 and ImageNet. Experimental results demonstrate that it outperforms previous state-of-the-arts considerably. Our code is at https://github.com/qizhangli/ila-plus-plus. | -
dc.language | eng | - |
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.subject | Adversarial examples | - |
dc.subject | Feature maps | - |
dc.subject | Transferability | - |
dc.title | Yet Another Intermediate-Level Attack | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-3-030-58517-4_15 | - |
dc.identifier.scopus | eid_2-s2.0-85092906267 | - |
dc.identifier.volume | 12361 LNCS | - |
dc.identifier.spage | 241 | - |
dc.identifier.epage | 257 | - |
dc.identifier.eissn | 1611-3349 | - |