Article: Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries

Title: Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries
Authors: Ganesh, Swetha; Chen, Jiayu; Thoppe, Gugan; Aggarwal, Vaneet
Issue Date: 2024
Citation: Transactions on Machine Learning Research, 2024, v. 2024
Abstract: Federated Reinforcement Learning (FRL) allows multiple agents to collaboratively build a decision-making policy without sharing raw trajectories. However, if even a small fraction of these agents is adversarial, the results can be catastrophic. We propose a policy-gradient-based approach that is robust to adversarial agents that can send arbitrary values to the server. Under this setting, our results provide the first global convergence guarantees with general parametrization. These results demonstrate resilience to adversaries while achieving an optimal sample complexity of order [Formula In Abstract], where N is the total number of agents and f < N/2 is the number of adversarial agents.
Persistent Identifier: http://hdl.handle.net/10722/361832
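The abstract describes aggregation that tolerates up to f < N/2 adversarial agents sending arbitrary values to the server. As an illustration only (the record does not specify the paper's aggregator; coordinate-wise median is one standard Byzantine-robust choice and is assumed here), such robust aggregation might be sketched as:

```python
import numpy as np

def robust_aggregate(gradients: np.ndarray) -> np.ndarray:
    """Coordinate-wise median of per-agent gradient estimates.

    gradients: array of shape (N, d), one row per agent. With f < N/2
    adversarial rows, each coordinate's median lies within the range
    spanned by the honest agents' values for that coordinate.
    """
    return np.median(gradients, axis=0)

# N = 5 agents, f = 2 adversarial; honest gradients cluster near (1, 2).
honest = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]])
adversarial = np.array([[1e6, -1e6], [-1e6, 1e6]])  # arbitrary values
agg = robust_aggregate(np.vstack([honest, adversarial]))
# The median ignores the extreme outliers and stays in the honest cluster.
```

Because the median of each coordinate is bracketed by honest values whenever a strict majority of agents is honest, the two adversarial rows cannot pull the aggregate arbitrarily far, unlike a plain mean.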

 

DC Field | Value | Language
dc.contributor.author | Ganesh, Swetha | -
dc.contributor.author | Chen, Jiayu | -
dc.contributor.author | Thoppe, Gugan | -
dc.contributor.author | Aggarwal, Vaneet | -
dc.date.accessioned | 2025-09-16T04:21:21Z | -
dc.date.available | 2025-09-16T04:21:21Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Transactions on Machine Learning Research, 2024, v. 2024 | -
dc.identifier.uri | http://hdl.handle.net/10722/361832 | -
dc.description.abstract | Federated Reinforcement Learning (FRL) allows multiple agents to collaboratively build a decision-making policy without sharing raw trajectories. However, if even a small fraction of these agents is adversarial, the results can be catastrophic. We propose a policy-gradient-based approach that is robust to adversarial agents that can send arbitrary values to the server. Under this setting, our results provide the first global convergence guarantees with general parametrization. These results demonstrate resilience to adversaries while achieving an optimal sample complexity of order [Formula In Abstract], where N is the total number of agents and f < N/2 is the number of adversarial agents. | -
dc.language | eng | -
dc.relation.ispartof | Transactions on Machine Learning Research | -
dc.title | Global Convergence Guarantees for Federated Policy Gradient Methods with Adversaries | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85217913772 | -
dc.identifier.volume | 2024 | -
dc.identifier.eissn | 2835-8856 | -
