
Conference Paper: On the Adversarial Robustness of Graph Neural Networks with Graph Reduction

Title: On the Adversarial Robustness of Graph Neural Networks with Graph Reduction
Authors: Wu, Kerui; Chow, Ka-Ho; Wei, Wenqi; Yu, Lei
Issue Date: 22-Sep-2025
Abstract

As Graph Neural Networks (GNNs) become increasingly popular for learning from large-scale graph data across various domains, their susceptibility to adversarial attacks when using graph reduction techniques for scalability remains underexplored. In this paper, we present an extensive empirical study to investigate the impact of graph reduction techniques, specifically graph coarsening and sparsification, on the robustness of GNNs against adversarial attacks. Through extensive experiments involving multiple datasets and GNN architectures, we examine the effects of four sparsification and six coarsening methods on poisoning attacks. Our results indicate that, while graph sparsification can mitigate the effectiveness of certain poisoning attacks, such as Mettack, it has limited impact on others, like PGD. Conversely, graph coarsening tends to amplify the adversarial impact, significantly reducing classification accuracy as the reduction ratio decreases. Additionally, we provide a novel analysis of the causes driving these effects and examine how defensive GNN models perform under graph reduction, offering practical insights for designing robust GNNs within graph acceleration systems.
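
To make the pipeline the abstract describes concrete, below is a minimal, illustrative sketch of a sparsification step applied to a graph before GNN training. This is a toy example under stated assumptions, not the paper's implementation: the function name, the edge-list representation, and uniform random edge sampling (a simple stand-in for the spectral or similarity-based sparsifiers the study actually evaluates) are all assumptions for illustration.

```python
import random

def random_edge_sparsify(edges, keep_ratio, seed=0):
    """Keep a uniformly random subset of edges.

    Hypothetical toy sparsifier: real methods studied in work like this
    typically score edges by importance (e.g., effective resistance)
    rather than sampling uniformly.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    n_keep = max(1, int(len(edges) * keep_ratio))
    return rng.sample(edges, n_keep)

# Example: reduce a small undirected graph to roughly half its edges,
# as one might do before feeding it to a GNN training loop.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)]
reduced = random_edge_sparsify(edges, keep_ratio=0.5)
print(reduced)
```

The reduction ratio (`keep_ratio` here) is the knob the paper varies; its findings concern how robustness to poisoning attacks changes as that ratio shrinks.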


Persistent Identifier: http://hdl.handle.net/10722/358772

DC Field / Value
dc.contributor.author: Wu, Kerui
dc.contributor.author: Chow, Ka-Ho
dc.contributor.author: Wei, Wenqi
dc.contributor.author: Yu, Lei
dc.date.accessioned: 2025-08-13T07:47:56Z
dc.date.available: 2025-08-13T07:47:56Z
dc.date.issued: 2025-09-22
dc.identifier.uri: http://hdl.handle.net/10722/358772
dc.description.abstract: As Graph Neural Networks (GNNs) become increasingly popular for learning from large-scale graph data across various domains, their susceptibility to adversarial attacks when using graph reduction techniques for scalability remains underexplored. In this paper, we present an extensive empirical study to investigate the impact of graph reduction techniques, specifically graph coarsening and sparsification, on the robustness of GNNs against adversarial attacks. Through extensive experiments involving multiple datasets and GNN architectures, we examine the effects of four sparsification and six coarsening methods on poisoning attacks. Our results indicate that, while graph sparsification can mitigate the effectiveness of certain poisoning attacks, such as Mettack, it has limited impact on others, like PGD. Conversely, graph coarsening tends to amplify the adversarial impact, significantly reducing classification accuracy as the reduction ratio decreases. Additionally, we provide a novel analysis of the causes driving these effects and examine how defensive GNN models perform under graph reduction, offering practical insights for designing robust GNNs within graph acceleration systems.
dc.language: eng
dc.relation.ispartof: European Symposium on Research in Computer Security (ESORICS) (22/09/2025-26/09/2025, Toulouse)
dc.title: On the Adversarial Robustness of Graph Neural Networks with Graph Reduction
dc.type: Conference_Paper
