Conference Paper: Towards Evaluating the Robustness of Neural Networks Learned by Transduction
Title | Towards Evaluating the Robustness of Neural Networks Learned by Transduction
---|---
Authors | Chen, Jiefeng; Wu, Xi; Guo, Yang; Liang, Yingyu; Jha, Somesh
Issue Date | 2022
Citation | ICLR 2022 - 10th International Conference on Learning Representations, 2022
Abstract | There has been an emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., ArXiv 2021). Compared to traditional defenses, these defense mechanisms “dynamically learn” the model based on test-time input; and theoretically, attacking these defenses reduces to solving a bilevel optimization problem, which poses difficulty in crafting adaptive attacks. In this paper, we examine these defense mechanisms from a principled threat analysis perspective. We formulate and analyze threat models for transductive-learning based defenses, and point out important subtleties. We propose the principle of attacking model space for solving bilevel attack objectives, and present Greedy Model Space Attack (GMSA), an attack framework that can serve as a new baseline for evaluating transductive-learning based defenses. Through systematic evaluation, we show that GMSA, even with weak instantiations, can break previous transductive-learning based defenses, which were resilient to previous attacks, such as AutoAttack. On the positive side, we report a somewhat surprising empirical result of “transductive adversarial training”: Adversarially retraining the model using fresh randomness at the test time gives a significant increase in robustness against attacks we consider.
Persistent Identifier | http://hdl.handle.net/10722/341368
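Two technical ideas in the abstract benefit from being spelled out. First, because a transductive defense retrains the model on the (possibly perturbed) test input, the adaptive attacker faces a bilevel problem: the inner level is the defender's learner, the outer level the attacker's perturbation. A schematic form, with notation assumed here rather than copied from the paper, is

$$\max_{U' \in N(U)} \; L_{\mathrm{adv}}\big(\Gamma(D, U'),\, U'\big) \quad \text{s.t.} \quad \Gamma(D, U') \in \operatorname*{arg\,min}_{F \in \mathcal{F}} \; L_{\mathrm{train}}(F;\, D, U'),$$

where $D$ is the training set, $U$ the clean test set, $N(U)$ the allowed perturbation neighborhood, and $\Gamma$ the defender's transductive learner; the inner $\arg\min$ is what makes naive gradient-based adaptive attacks awkward to craft.

Second, the "attacking model space" principle behind GMSA can be read as an alternation: each round, the defender's learner induces a fresh model from the current adversarial inputs, and the attacker re-optimizes against all models induced so far. Below is a minimal PyTorch sketch of an averaged variant under an L-infinity threat model; `transductive_learn`, the PGD hyperparameters, and every other name here are hypothetical stand-ins, not the authors' released code.

```python
import torch

def gmsa_avg(transductive_learn, x, y, eps=8 / 255, alpha=2 / 255,
             pgd_steps=40, rounds=10):
    """Greedy model-space attack, averaged variant (sketch only).

    transductive_learn: hypothetical callable mapping a perturbed test batch
    to the model the defender would learn from it at test time.
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv, models = x.clone(), []
    for _ in range(rounds):
        # Defender's move: retrain transductively on the current adversarial set.
        models.append(transductive_learn(x_adv.detach()).eval().requires_grad_(False))
        # Attacker's move: PGD against the average loss over all models so far.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(pgd_steps):
            loss = sum(loss_fn(m(x + delta), y) for m in models) / len(models)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)                    # stay in the L-inf ball
                delta.copy_((x + delta).clamp(0, 1) - x)   # keep pixels in [0, 1]
            delta.grad.zero_()
        x_adv = (x + delta).detach()
    return x_adv
```

A min-over-models variant follows the same loop with a minimum in place of the average. Either way, this only illustrates the greedy principle the abstract describes; the paper's actual instantiations and schedules may differ.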
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jiefeng | - |
dc.contributor.author | Wu, Xi | - |
dc.contributor.author | Guo, Yang | - |
dc.contributor.author | Liang, Yingyu | - |
dc.contributor.author | Jha, Somesh | - |
dc.date.accessioned | 2024-03-13T08:42:16Z | - |
dc.date.available | 2024-03-13T08:42:16Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | ICLR 2022 - 10th International Conference on Learning Representations, 2022 | - |
dc.identifier.uri | http://hdl.handle.net/10722/341368 | - |
dc.language | eng | - |
dc.relation.ispartof | ICLR 2022 - 10th International Conference on Learning Representations | - |
dc.title | TOWARDS EVALUATING THE ROBUSTNESS OF NEURAL NETWORKS LEARNED BY TRANSDUCTION | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85135774521 | - |