File Download
There are no files associated with this item.

Links for fulltext (may require subscription):
- Publisher website (DOI): https://doi.org/10.1145/3374664.3375728
- Scopus: eid_2-s2.0-85083372033
Citations:
- Scopus: 0

Appears in Collections: Conference Paper

Explore the Transformation Space for Adversarial Images
Title | Explore the Transformation Space for Adversarial Images
---|---
Authors | Chen, Jiyu; Wang, David; Chen, Hao
Keywords | adversarial attacks; deep learning security; image transformation
Issue Date | 2020
Citation | CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy, 2020, p. 109-120
Abstract | Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some Lp-norm, and defense models are also evaluated on adversarial examples restricted inside Lp-norm balls. However, we wish to explore adversarial examples that exist beyond Lp-norm balls and their implications for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformation and propose two gradient-based attacks. Since the Lp-norm is inappropriate for measuring image quality in the transformation space, we use the similarity between transformations and the Structural Similarity Index. Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets - CIFAR10, SVHN, and ImageNet - and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful. They find high-quality adversarial images that have higher transferability and misclassification rates than C&W's Lp attacks, especially at high confidence levels. They are also significantly harder to defend against by retraining than C&W's Lp attacks. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model.
Persistent Identifier | http://hdl.handle.net/10722/346774
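
The abstract above measures image quality with the Structural Similarity Index (SSIM) rather than an Lp-norm, because a color transformation can move every pixel (a large Lp distance) while leaving the image visually intact. A minimal sketch of that idea, assuming a simple per-channel gain/bias color transform as a stand-in for the paper's transformation family (hypothetical code, not the authors' implementation):

```python
# Sketch: apply a per-channel color transformation to an image and score
# the result with SSIM, the quality metric named in the abstract.
import numpy as np
from skimage.metrics import structural_similarity

def color_transform(img, gains, biases):
    """Per-channel color transform: out = gain * img + bias.

    `gains` and `biases` are illustrative stand-ins for the parameters
    a transformation attack would optimize with gradients.
    """
    out = img * np.asarray(gains) + np.asarray(biases)
    return np.clip(out, 0.0, 1.0)

# Example: a mild global color shift on a random stand-in "image".
rng = np.random.default_rng(0)
original = rng.random((32, 32, 3))   # CIFAR10-sized float image in [0, 1]
candidate = color_transform(original, gains=[1.1, 0.9, 1.0],
                            biases=[0.02, 0.0, -0.02])

# SSIM near 1.0 means the transformed image is visually close to the
# original, even when its Lp distance from the original is large.
score = structural_similarity(original, candidate,
                              channel_axis=-1, data_range=1.0)
print(f"SSIM = {score:.3f}")
```

A uniform shift like this typically keeps SSIM high even though its L2 distance from the original is substantial, which is exactly the mismatch the abstract points to when it calls the Lp-norm inappropriate for the transformation space.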
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jiyu | - |
dc.contributor.author | Wang, David | - |
dc.contributor.author | Chen, Hao | - |
dc.date.accessioned | 2024-09-17T04:13:12Z | - |
dc.date.available | 2024-09-17T04:13:12Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy, 2020, p. 109-120 | - |
dc.identifier.uri | http://hdl.handle.net/10722/346774 | - |
dc.description.abstract | Deep learning models are vulnerable to adversarial examples. Most current adversarial attacks add pixel-wise perturbations restricted to some Lp-norm, and defense models are also evaluated on adversarial examples restricted inside Lp-norm balls. However, we wish to explore adversarial examples that exist beyond Lp-norm balls and their implications for attacks and defenses. In this paper, we focus on adversarial images generated by transformations. We start with color transformation and propose two gradient-based attacks. Since the Lp-norm is inappropriate for measuring image quality in the transformation space, we use the similarity between transformations and the Structural Similarity Index. Next, we explore a larger transformation space consisting of combinations of color and affine transformations. We evaluate our transformation attacks on three data sets - CIFAR10, SVHN, and ImageNet - and their corresponding models. Finally, we perform retraining defenses to evaluate the strength of our attacks. The results show that transformation attacks are powerful. They find high-quality adversarial images that have higher transferability and misclassification rates than C&W's Lp attacks, especially at high confidence levels. They are also significantly harder to defend against by retraining than C&W's Lp attacks. More importantly, exploring different attack spaces makes it more challenging to train a universally robust model. | -
dc.language | eng | - |
dc.relation.ispartof | CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy | - |
dc.subject | adversarial attacks | - |
dc.subject | deep learning security | - |
dc.subject | image transformation | - |
dc.title | Explore the Transformation Space for Adversarial Images | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3374664.3375728 | - |
dc.identifier.scopus | eid_2-s2.0-85083372033 | - |
dc.identifier.spage | 109 | - |
dc.identifier.epage | 120 | - |
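
The abstract's other key idea is to run gradient-based attacks over transformation parameters rather than raw pixels. A rough, hypothetical sketch of an untargeted attack in that spirit, assuming PyTorch, a pretrained classifier, and the same per-channel color transform as above (the paper's actual attacks also constrain image quality via transformation similarity and SSIM, which this sketch omits):

```python
# Sketch: optimize color-transformation parameters (not pixels) so that
# the transformed image is misclassified. Hypothetical code, not the
# authors' implementation.
import torch
import torch.nn.functional as F

def attack_color_params(model, image, label, steps=100, lr=0.01):
    """Search the color-transformation space for a misclassified image.

    `image`: (1, 3, H, W) float tensor in [0, 1].
    `label`: LongTensor of shape (1,) holding the true class index.
    """
    gain = torch.ones(1, 3, 1, 1, requires_grad=True)
    bias = torch.zeros(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([gain, bias], lr=lr)
    for _ in range(steps):
        candidate = (gain * image + bias).clamp(0.0, 1.0)
        logits = model(candidate)
        if logits.argmax(dim=1) != label:
            break  # the transformation already fools the model
        # Minimize the negative cross-entropy, i.e. maximize the loss
        # on the true label (untargeted attack).
        loss = -F.cross_entropy(logits, label)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (gain * image + bias).clamp(0.0, 1.0).detach()
```

Because only six parameters (three gains, three biases) are optimized, the search stays inside the transformation space by construction; a real attack in the paper's setting would add the quality constraint and the affine-transformation components the abstract describes.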