Conference Paper: Towards Building More Robust Models with Frequency Bias

Title: Towards Building More Robust Models with Frequency Bias
Authors: Bu, Qingwen; Huang, Dong; Cui, Heming
Issue Date: 6-Oct-2023
Abstract

The vulnerability of deep neural networks to adversarial samples has been a major impediment to their broad applications, despite their success in various fields. Recently, some works suggested that adversarially-trained models emphasize the importance of low-frequency information to achieve higher robustness. While several attempts have been made to leverage this frequency characteristic, they have all faced the issue that applying low-pass filters directly to input images leads to irreversible loss of discriminative information and poor generalizability to datasets with distinct frequency features. This paper presents a plug-and-play module called the Frequency Preference Control Module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations, providing better utilization of frequency in robust learning. Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework, further improving model robustness across different architectures and datasets. Additionally, experiments were conducted to examine how the frequency bias of robust models impacts the adversarial training process and its final robustness, revealing interesting insights.
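The abstract describes the module as splitting intermediate feature representations into low- and high-frequency components and reweighting them. The paper's actual module uses learnable, adaptive reconfiguration; the sketch below is only a simplified fixed-weight analogue of that idea, and the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def frequency_recompose(feat, cutoff=0.25, low_w=1.0, high_w=0.5):
    """Split a 2-D feature map into low- and high-frequency parts via the
    FFT and recombine them with separate weights (illustrative sketch)."""
    h, w = feat.shape
    # Centered 2-D spectrum of the feature map
    spectrum = np.fft.fftshift(np.fft.fft2(feat))
    # Radial frequency coordinate, normalized so the image corners sit near r ~ 0.7
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    low_mask = r <= cutoff  # True inside the low-frequency disk
    # Inverse-transform each band back to the spatial domain
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    # Recombine with per-band weights; low_w > high_w biases toward low frequencies
    return low_w * low + high_w * high
```

With `low_w = high_w = 1.0` the two bands sum back to the original feature map, so the weights purely control the frequency bias of the recombined features; in the paper this reweighting is applied inside the network to intermediate features rather than to input images, avoiding the irreversible information loss of input-level low-pass filtering.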


Persistent Identifier: http://hdl.handle.net/10722/333864

 

DC Field | Value | Language
dc.contributor.author | Bu, Qingwen | -
dc.contributor.author | Huang, Dong | -
dc.contributor.author | Cui, Heming | -
dc.date.accessioned | 2023-10-06T08:39:43Z | -
dc.date.available | 2023-10-06T08:39:43Z | -
dc.date.issued | 2023-10-06 | -
dc.identifier.uri | http://hdl.handle.net/10722/333864 | -
dc.description.abstract | The vulnerability of deep neural networks to adversarial samples has been a major impediment to their broad applications, despite their success in various fields. Recently, some works suggested that adversarially-trained models emphasize the importance of low-frequency information to achieve higher robustness. While several attempts have been made to leverage this frequency characteristic, they have all faced the issue that applying low-pass filters directly to input images leads to irreversible loss of discriminative information and poor generalizability to datasets with distinct frequency features. This paper presents a plug-and-play module called the Frequency Preference Control Module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations, providing better utilization of frequency in robust learning. Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework, further improving model robustness across different architectures and datasets. Additionally, experiments were conducted to examine how the frequency bias of robust models impacts the adversarial training process and its final robustness, revealing interesting insights. | -
dc.language | eng | -
dc.relation.ispartof | IEEE International Conference on Computer Vision 2023 (02/10/2023-06/10/2023, Paris) | -
dc.title | Towards Building More Robust Models with Frequency Bias | -
dc.type | Conference_Paper | -
dc.identifier.doi | 10.48550/arXiv.2307.09763 | -
