Conference Paper: MagNet: A Two-Pronged defense against adversarial examples

Title: MagNet: A Two-Pronged defense against adversarial examples
Authors: Meng, Dongyu; Chen, Hao
Issue Date: 2017
Citation: Proceedings of the ACM Conference on Computer and Communications Security, 2017, p. 135-147
Abstract: Deep learning has shown impressive performance on hard perceptual problems. However, researchers found deep learning systems to be vulnerable to small, specially crafted perturbations that are imperceptible to humans. Such perturbations cause deep learning systems to misclassify adversarial examples, with potentially disastrous consequences where safety or security is crucial. Prior defenses against adversarial examples either targeted specific attacks or were shown to be ineffective. We propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet neither modifies the protected classifier nor requires knowledge of the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. The detector networks learn to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since they assume no specific process for generating adversarial examples, they generalize well. The reformer network moves adversarial examples towards the manifold of normal examples, which is effective for correctly classifying adversarial examples with small perturbation. We discuss the intrinsic difficulties in defending against whitebox attacks and propose a mechanism to defend against graybox attacks. Inspired by the use of randomness in cryptography, we use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios without sacrificing false positive rate on normal examples.
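The two-pronged design described in the abstract can be illustrated with a toy stand-in for MagNet's networks. This sketch is not the paper's implementation: it assumes a hypothetical 1-D manifold of normal examples (the line y = x in the plane) and replaces the learned autoencoder with exact orthogonal projection onto that line, so reconstruction error plays the role of the detector's score and the projection plays the role of the reformer. The threshold value is likewise an assumption for the sketch.

```python
import numpy as np

# Toy MagNet-style defense (illustrative only, not the paper's code).
# Assumption: "normal" examples lie on the 1-D manifold {(t, t)} in R^2,
# and a perfect autoencoder for it is orthogonal projection onto that line.

def reformer(x):
    """Move x toward the manifold of normal examples (project onto y = x)."""
    t = (x[0] + x[1]) / 2.0
    return np.array([t, t])

def detector(x, threshold=0.5):
    """Flag x as adversarial when its reconstruction error is large."""
    err = np.linalg.norm(x - reformer(x))
    return err > threshold

normal = np.array([1.0, 1.0])        # on the manifold
adversarial = np.array([3.0, -1.0])  # far off the manifold

print(detector(normal))       # → False (zero reconstruction error)
print(detector(adversarial))  # → True  (error ≈ 2.83 exceeds threshold)
print(reformer(np.array([1.2, 0.8])))  # → [1. 1.]  small perturbation reformed
```

In the real system both roles are played by autoencoders trained only on normal data, which is why the defense needs no knowledge of the attack: large-perturbation adversarial examples are caught by the detector, while small-perturbation ones are nudged back onto the manifold before reaching the protected classifier.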
Persistent Identifier: http://hdl.handle.net/10722/346656
ISSN: 1543-7221
2023 SCImago Journal Rankings: 1.430


DC Field | Value | Language
dc.contributor.author | Meng, Dongyu | -
dc.contributor.author | Chen, Hao | -
dc.date.accessioned | 2024-09-17T04:12:22Z | -
dc.date.available | 2024-09-17T04:12:22Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Proceedings of the ACM Conference on Computer and Communications Security, 2017, p. 135-147 | -
dc.identifier.issn | 1543-7221 | -
dc.identifier.uri | http://hdl.handle.net/10722/346656 | -
dc.description.abstract | Deep learning has shown impressive performance on hard perceptual problems. However, researchers found deep learning systems to be vulnerable to small, specially crafted perturbations that are imperceptible to humans. Such perturbations cause deep learning systems to misclassify adversarial examples, with potentially disastrous consequences where safety or security is crucial. Prior defenses against adversarial examples either targeted specific attacks or were shown to be ineffective. We propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet neither modifies the protected classifier nor requires knowledge of the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. The detector networks learn to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since they assume no specific process for generating adversarial examples, they generalize well. The reformer network moves adversarial examples towards the manifold of normal examples, which is effective for correctly classifying adversarial examples with small perturbation. We discuss the intrinsic difficulties in defending against whitebox attacks and propose a mechanism to defend against graybox attacks. Inspired by the use of randomness in cryptography, we use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios without sacrificing false positive rate on normal examples. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the ACM Conference on Computer and Communications Security | -
dc.title | MagNet: A Two-Pronged defense against adversarial examples | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1145/3133956.3134057 | -
dc.identifier.scopus | eid_2-s2.0-85038925130 | -
dc.identifier.spage | 135 | -
dc.identifier.epage | 147 | -
