Conference Paper: POPQORN: Quantifying Robustness of Recurrent Neural Networks

Title: POPQORN: Quantifying Robustness of Recurrent Neural Networks
Authors: Ko, CY; Lyu, Z; Weng, TW; Daniel, L; Wong, N; Lin, D
Issue Date: 2019
Publisher: PMLR. The Journal's web site is located at http://proceedings.mlr.press/
Citation: Proceedings of the 36th International Conference on Machine Learning (PMLR), Long Beach, California, USA, 9-15 June 2019. In Proceedings of Machine Learning Research (PMLR), 2019, v. 97, p. 3468-3477
Abstract: The vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness quantification for neural networks, namely, certified lower bounds of the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g. multi-layer perceptron or convolutional networks. It remains an open problem to quantify robustness for recurrent networks, especially LSTM and GRU. For such networks, there exist additional challenges in computing the robustness quantification, such as handling the inputs at multiple steps and the interaction between gates and states. In this work, we propose POPQORN (Propagated-output Quantified Robustness for RNNs), a general algorithm to quantify robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness on different network architectures and show that the robustness quantification on individual steps can lead to new insights.
Persistent Identifier: http://hdl.handle.net/10722/275278
ISSN: 2640-3498
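
The certified lower bound referred to in the abstract can be stated generically as follows (a standard formulation, not quoted from the paper; the symbols $F$, $c$, $x_0$, $\epsilon^*$, and $\epsilon_{\mathrm{cert}}$ are illustrative). For a classifier $F$ that assigns the correct label $c$ to an input $x_0$, the minimum adversarial perturbation under an $\ell_p$ norm is

    $\epsilon^*(x_0) = \min_{x} \{\, \|x - x_0\|_p : \arg\max_j F_j(x) \neq c \,\}$,

and a certified lower bound is any $\epsilon_{\mathrm{cert}} \le \epsilon^*(x_0)$, which guarantees

    $\|x - x_0\|_p \le \epsilon_{\mathrm{cert}} \;\Longrightarrow\; \arg\max_j F_j(x) = c$,

i.e. no perturbation of norm at most $\epsilon_{\mathrm{cert}}$ can change the prediction. For a recurrent network the input $x_0$ is a sequence of frames, which is why the abstract highlights handling inputs at multiple steps and reporting the quantification on individual steps.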

 

DC Field: Value

dc.contributor.author: Ko, CY
dc.contributor.author: Lyu, Z
dc.contributor.author: Weng, TW
dc.contributor.author: Daniel, L
dc.contributor.author: Wong, N
dc.contributor.author: Lin, D
dc.date.accessioned: 2019-09-10T02:39:17Z
dc.date.available: 2019-09-10T02:39:17Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings of the 36th International Conference on Machine Learning (PMLR), Long Beach, California, USA, 9-15 June 2019. In Proceedings of Machine Learning Research (PMLR), 2019, v. 97, p. 3468-3477
dc.identifier.issn: 2640-3498
dc.identifier.uri: http://hdl.handle.net/10722/275278
dc.description.abstract: The vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness quantification for neural networks, namely, certified lower bounds of the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g. multi-layer perceptron or convolutional networks. It remains an open problem to quantify robustness for recurrent networks, especially LSTM and GRU. For such networks, there exist additional challenges in computing the robustness quantification, such as handling the inputs at multiple steps and the interaction between gates and states. In this work, we propose POPQORN (Propagated-output Quantified Robustness for RNNs), a general algorithm to quantify robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness on different network architectures and show that the robustness quantification on individual steps can lead to new insights.
dc.language: eng
dc.publisher: PMLR. The Journal's web site is located at http://proceedings.mlr.press/
dc.relation.ispartof: Proceedings of Machine Learning Research (PMLR)
dc.relation.ispartof: International Conference on Machine Learning (ICML)
dc.title: POPQORN: Quantifying Robustness of Recurrent Neural Networks
dc.type: Conference_Paper
dc.identifier.email: Wong, N: nwong@eee.hku.hk
dc.identifier.authority: Wong, N=rp00190
dc.description.nature: published_or_final_version
dc.identifier.hkuros: 304917
dc.identifier.volume: 97
dc.identifier.spage: 3468
dc.identifier.epage: 3477
dc.publisher.place: United States
dc.identifier.issnl: 2640-3498
