Article: An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning

Title: An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning
Authors: Li, S; Ngai, EC; Voigt, T
Keywords: Byzantine attacks; Computational modeling; distributed learning; federated learning; neural networks; Optimization; Performance evaluation; robustness; Servers; Training
Issue Date: 16-Jan-2023
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Big Data, 2023
Abstract

Byzantine-robust federated learning aims to mitigate Byzantine failures during the federated training process, where malicious participants (known as Byzantine clients) may upload arbitrary local updates to the central server in order to degrade the performance of the global model. In recent years, several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients and improve the robustness of federated learning. These solutions were claimed to be Byzantine-robust under certain assumptions. Meanwhile, new attack strategies are emerging that strive to circumvent these defense schemes. However, there is a lack of systematic comparison and empirical study of these schemes and attacks. In this paper, we conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular federated learning algorithms. We first survey existing Byzantine attack strategies, as well as Byzantine-robust aggregation schemes that aim to defend against Byzantine attacks. We also propose a new scheme to enhance the robustness of a clustering-based scheme by automatically clipping the updates. Then we provide an experimental evaluation of eight aggregation schemes under five different Byzantine attacks. Our experimental results show that these aggregation schemes sustain relatively high accuracy in some cases, but they are not effective in all cases. In particular, our proposed scheme successfully defends against most attacks under independent and identically distributed (IID) local datasets. However, when the local datasets are Non-IID, the performance of all the aggregation schemes significantly decreases. With Non-IID data, some of these aggregation schemes fail even in the complete absence of Byzantine clients. Based on our experimental study, we conclude that the robustness of all the aggregation schemes is limited, highlighting the need for new defense strategies, in particular for Non-IID datasets.
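The clipping idea mentioned in the abstract can be illustrated with a minimal sketch in plain Python. Note that the function names, the fixed clipping threshold `tau`, and the plain averaging step are illustrative assumptions for exposition only; the paper's proposed scheme determines the clipping threshold automatically and combines clipping with clustering-based aggregation.

```python
import math

def clip_update(update, tau):
    """Scale an update down so that its L2 norm is at most tau."""
    norm = math.sqrt(sum(x * x for x in update))
    if norm <= tau or norm == 0.0:
        return list(update)
    scale = tau / norm
    return [x * scale for x in update]

def clipped_mean(updates, tau):
    """Clip every client update, then average coordinate-wise."""
    clipped = [clip_update(u, tau) for u in updates]
    n = len(clipped)
    dim = len(clipped[0])
    return [sum(u[i] for u in clipped) / n for i in range(dim)]

# Two honest clients and one Byzantine client sending a huge update.
honest = [[0.1, 0.2], [0.12, 0.18]]
byzantine = [[100.0, -100.0]]
agg = clipped_mean(honest + byzantine, tau=1.0)
```

Without clipping, the single Byzantine update would dominate the average; with clipping, its influence on each coordinate of `agg` is bounded by `tau / n`, which is the basic motivation for clipping-based defenses.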


Persistent Identifier: http://hdl.handle.net/10722/331384
ISSN: 2332-7790
2023 Impact Factor: 7.5
2023 SCImago Journal Rankings: 1.821

 

DC Field: Value
dc.contributor.author: Li, S
dc.contributor.author: Ngai, EC
dc.contributor.author: Voigt, T
dc.date.accessioned: 2023-09-21T06:55:15Z
dc.date.available: 2023-09-21T06:55:15Z
dc.date.issued: 2023-01-16
dc.identifier.citation: IEEE Transactions on Big Data, 2023
dc.identifier.issn: 2332-7790
dc.identifier.uri: http://hdl.handle.net/10722/331384
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Big Data
dc.subject: Byzantine attacks
dc.subject: Computational modeling
dc.subject: distributed learning
dc.subject: federated learning
dc.subject: neural networks
dc.subject: Optimization
dc.subject: Performance evaluation
dc.subject: robustness
dc.subject: Servers
dc.subject: Training
dc.title: An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning
dc.type: Article
dc.identifier.doi: 10.1109/TBDATA.2023.3237397
dc.identifier.scopus: eid_2-s2.0-85147301735
dc.identifier.eissn: 2332-7790
dc.identifier.issnl: 2332-7790
