Conference Paper: Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks

Title: Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Authors: Chow, Ka Ho; Wei, Wenqi; Wu, Yanzhao; Liu, Ling
Keywords: adversarial deep learning; ensemble defense; ensemble diversity; robustness
Issue Date: 2019
Citation: Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019, 2019, p. 1282-1291
Abstract: Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to the benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has been accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines unsupervised model denoising ensemble with supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates, compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
Persistent Identifier: http://hdl.handle.net/10722/343297
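
The abstract describes a two-tier, cross-layer pipeline: an unsupervised denoising ensemble first repairs a possibly perturbed input, and a supervised verification ensemble then cross-checks the target model's prediction on the repaired input. The Python sketch below illustrates that control flow only; the function name modef_predict, the averaging of denoiser outputs, and the simple majority vote are illustrative assumptions, not the paper's algorithm, which additionally selects ensemble members by quantifying model diversity.

import numpy as np

def modef_predict(x, denoisers, verifiers, target_model):
    # Hypothetical sketch of the cross-layer defense flow described in the
    # abstract; not the authors' implementation.
    # Layer 1: unsupervised denoising ensemble repairs the (possibly
    # adversarial) input; averaging the repaired copies is an assumption.
    repaired = [d(x) for d in denoisers]
    x_repaired = np.mean(repaired, axis=0)

    # Layer 2: the protected target model predicts on the repaired input.
    y_target = target_model(x_repaired)

    # Layer 3: supervised verification ensemble votes on the same input;
    # disagreement with the target model flags a suspected attack.
    votes = [v(x_repaired) for v in verifiers]
    majority = max(set(votes), key=votes.count)
    return y_target, majority == y_target

# Toy usage with stand-in models (near-identity denoisers, constant
# classifiers), purely to show the calling convention.
x = np.zeros((28, 28))
denoisers = [lambda z: z, lambda z: z + 0.01]
verifiers = [lambda z: 7, lambda z: 7, lambda z: 3]
label, verified = modef_predict(x, denoisers, verifiers, target_model=lambda z: 7)
print(label, verified)  # 7 True

In this sketch, disagreement between the verification ensemble and the target model merely flags the input as suspect; the paper's framework uses such cross-layer disagreement to detect and repair adversarial examples.
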

 

DC Field: Value

dc.contributor.author: Chow, Ka Ho
dc.contributor.author: Wei, Wenqi
dc.contributor.author: Wu, Yanzhao
dc.contributor.author: Liu, Ling
dc.date.accessioned: 2024-05-10T09:07:00Z
dc.date.available: 2024-05-10T09:07:00Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019, 2019, p. 1282-1291
dc.identifier.uri: http://hdl.handle.net/10722/343297
dc.description.abstract: Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to the benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has been accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines unsupervised model denoising ensemble with supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, we show that MODEF achieves remarkable defense success rates, compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
dc.language: eng
dc.relation.ispartof: Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019
dc.subject: adversarial deep learning
dc.subject: ensemble defense
dc.subject: ensemble diversity
dc.subject: robustness
dc.title: Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/BigData47090.2019.9006090
dc.identifier.scopus: eid_2-s2.0-85081381101
dc.identifier.spage: 1282
dc.identifier.epage: 1291
