Conference Paper: ODG-Q: Robust Quantization via Online Domain Generalization

Title: ODG-Q: Robust Quantization via Online Domain Generalization
Authors: Tao, Chaofan; Wong, Ngai
Issue Date: 21-Aug-2022
Publisher: IEEE
Abstract

Quantizing neural networks to low bitwidths is important for model deployment on resource-limited edge hardware. Although a quantized network has a smaller model size and memory footprint, it is vulnerable to adversarial attacks, and few methods study the robustness and training efficiency of quantized networks. To this end, we propose ODG-Q, a new method that recasts robust quantization as an online domain generalization problem and generates diverse adversarial data at low cost during training. ODG-Q consistently outperforms existing works against various adversarial attacks. For example, on the CIFAR-10 dataset, ODG-Q achieves average improvements of 49.2% under five common white-box attacks and 21.7% under five common black-box attacks, with a training cost similar to that of natural training (i.e., without adversaries). To the best of our knowledge, this is the first work that trains both quantized and binary neural networks on ImageNet with consistently improved robustness under different attacks. We also provide a theoretical insight into ODG-Q that bounds the model risk on attacked data.
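
The record itself carries no code, but the general recipe the abstract describes (quantization-aware training in which adversarial batches are generated online and treated as extra training domains) can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than ODG-Q's actual algorithm: the QuantizeSTE function, the QuantLinear layer, the single-step FGSM generator fgsm_examples, the 4-bit default, and the equal weighting of the clean and adversarial losses are all placeholders.

    import torch
    import torch.nn.functional as F

    class QuantizeSTE(torch.autograd.Function):
        # Uniform fake-quantization with a straight-through estimator.
        @staticmethod
        def forward(ctx, w, bits):
            # map weights onto a symmetric uniform grid of 2**bits levels
            scale = w.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
            return torch.round(w / scale) * scale

        @staticmethod
        def backward(ctx, grad_out):
            # straight-through estimator: gradients skip the rounding
            return grad_out, None

    class QuantLinear(torch.nn.Linear):
        # Linear layer that quantizes its weights on the fly; the
        # optimizer still updates the full-precision copy.
        def __init__(self, in_features, out_features, bits=4):
            super().__init__(in_features, out_features)
            self.bits = bits

        def forward(self, x):
            w_q = QuantizeSTE.apply(self.weight, self.bits)
            return F.linear(x, w_q, self.bias)

    def fgsm_examples(model, x, y, eps):
        # one-step FGSM attack as a cheap online generator of an
        # adversarial "domain" (a hypothetical choice, not the paper's)
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    def train_step(model, x, y, optimizer, eps=8 / 255):
        # train jointly on the clean batch and a freshly generated
        # adversarial batch, i.e. two domains of the same task
        x_adv = fgsm_examples(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Since the abstract stresses "diverse adversarial data at a low cost", the real method presumably draws on more than a single attack; the one-step FGSM generator above is only the cheapest possible stand-in.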


Persistent Identifier: http://hdl.handle.net/10722/337554
ISI Accession Number ID: WOS:000897707601116

DC Metadata

dc.contributor.author: Tao, Chaofan
dc.contributor.author: Wong, Ngai
dc.date.accessioned: 2024-03-11T10:21:48Z
dc.date.available: 2024-03-11T10:21:48Z
dc.date.issued: 2022-08-21
dc.identifier.uri: http://hdl.handle.net/10722/337554
dc.description.abstract: Quantizing neural networks to low bitwidths is important for model deployment on resource-limited edge hardware. Although a quantized network has a smaller model size and memory footprint, it is vulnerable to adversarial attacks, and few methods study the robustness and training efficiency of quantized networks. To this end, we propose ODG-Q, a new method that recasts robust quantization as an online domain generalization problem and generates diverse adversarial data at low cost during training. ODG-Q consistently outperforms existing works against various adversarial attacks. For example, on the CIFAR-10 dataset, ODG-Q achieves average improvements of 49.2% under five common white-box attacks and 21.7% under five common black-box attacks, with a training cost similar to that of natural training (i.e., without adversaries). To the best of our knowledge, this is the first work that trains both quantized and binary neural networks on ImageNet with consistently improved robustness under different attacks. We also provide a theoretical insight into ODG-Q that bounds the model risk on attacked data.
dc.language: eng
dc.publisher: IEEE
dc.relation.ispartof: 26th International Conference on Pattern Recognition, ICPR2022 (21/08/2022-25/08/2022, Montreal, Quebec)
dc.title: ODG-Q: Robust Quantization via Online Domain Generalization
dc.type: Conference_Paper
dc.identifier.doi: 10.1109/ICPR56361.2022.9956164
dc.identifier.scopus: eid_2-s2.0-85143589373
dc.identifier.volume: 2022-August
dc.identifier.spage: 1822
dc.identifier.epage: 1828
dc.identifier.isi: WOS:000897707601116
