Article: Learning Multiagent Options for Tabular Reinforcement Learning using Factor Graphs

Title: Learning Multiagent Options for Tabular Reinforcement Learning using Factor Graphs
Authors: Chen, Jiayu; Chen, Jingdi; Lan, Tian; Aggarwal, Vaneet
Keywords: Kronecker product; multiagent reinforcement learning (MARL); option discovery
Issue Date: 2023
Citation: IEEE Transactions on Artificial Intelligence, 2023, v. 4, n. 5, p. 1141-1153
Abstract: Covering option discovery has been developed to improve the exploration of reinforcement learning in single-agent scenarios, where only sparse reward signals are available. It aims to connect the most distant states identified through the Fiedler vector of the state transition graph. However, the approach cannot be directly extended to multiagent scenarios, since the joint state space grows exponentially with the number of agents, thus prohibiting efficient option computation. Existing research adopting options in multiagent scenarios still relies on single-agent algorithms and fails to directly discover joint options that can improve the connectivity of the joint state space. In this article, we propose a new algorithm to directly compute multiagent options with collaborative exploratory behaviors while still enjoying the ease of decomposition. Our key idea is to approximate the joint state space as the Kronecker product of individual agents' state spaces, based on which we can directly estimate the Fiedler vector of the joint state space using the Laplacian spectrum of individual agents' transition graphs. This decomposition enables us to efficiently construct multiagent joint options by encouraging agents to connect the subgoal joint states, which correspond to the minimum or maximum of the estimated joint Fiedler vector. Evaluation on multiagent collaborative tasks shows that our algorithm successfully identifies multiagent options and significantly outperforms prior works using single-agent options or no options, in terms of both faster exploration and higher cumulative rewards.
Persistent Identifier: http://hdl.handle.net/10722/361670
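The abstract's central computational step, estimating the Fiedler vector of the joint transition graph from the Laplacian spectra of the individual agents' graphs, can be made concrete with a small sketch. The code below is illustrative only and not the authors' implementation: it assumes the joint graph is modeled as the Kronecker product of the agents' transition graphs and works with the symmetrically normalized Laplacian, for which the product graph's eigenpairs factor exactly into per-agent eigenpairs. The function names (normalized_adjacency, estimate_joint_fiedler) and the toy ring graphs are invented for this example.

```python
import numpy as np
from itertools import product

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2} of one agent's transition graph."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def estimate_joint_fiedler(adjacencies):
    """Estimate a Fiedler vector of the joint transition graph, modeled as the
    Kronecker product of the individual agents' transition graphs, using only
    per-agent eigendecompositions (the joint Laplacian is never formed)."""
    spectra = [np.linalg.eigh(normalized_adjacency(a)) for a in adjacencies]

    # For a Kronecker-product graph, the normalized Laplacian eigenvalues are
    # 1 - prod_k mu_k over combinations of per-agent eigenvalues mu_k, and the
    # matching eigenvectors are Kronecker products of the per-agent eigenvectors.
    combos = sorted(
        (1.0 - np.prod([spectra[k][0][i] for k, i in enumerate(idx)]), idx)
        for idx in product(*[range(len(w)) for w, _ in spectra])
    )
    _, fiedler_idx = combos[1]  # skip the trivial zero eigenvalue

    fiedler = np.ones(1)
    for k, i in enumerate(fiedler_idx):
        fiedler = np.kron(fiedler, spectra[k][1][:, i])
    return fiedler

# Toy usage: two agents on 5-state ring graphs; the estimated joint Fiedler
# vector lives in the 25-dimensional joint state space.
ring = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
fiedler = estimate_joint_fiedler([ring, ring])
print("subgoal joint states:", int(np.argmin(fiedler)), int(np.argmax(fiedler)))
```

The joint states at the minimum and maximum of this estimated Fiedler vector are the subgoal joint states that, per the abstract, the multiagent options are constructed to connect.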

 

DC Field | Value | Language
dc.contributor.author | Chen, Jiayu | -
dc.contributor.author | Chen, Jingdi | -
dc.contributor.author | Lan, Tian | -
dc.contributor.author | Aggarwal, Vaneet | -
dc.date.accessioned | 2025-09-16T04:18:42Z | -
dc.date.available | 2025-09-16T04:18:42Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | IEEE Transactions on Artificial Intelligence, 2023, v. 4, n. 5, p. 1141-1153 | -
dc.identifier.uri | http://hdl.handle.net/10722/361670 | -
dc.description.abstract | Covering option discovery has been developed to improve the exploration of reinforcement learning in single-agent scenarios, where only sparse reward signals are available. It aims to connect the most distant states identified through the Fiedler vector of the state transition graph. However, the approach cannot be directly extended to multiagent scenarios, since the joint state space grows exponentially with the number of agents, thus prohibiting efficient option computation. Existing research adopting options in multiagent scenarios still relies on single-agent algorithms and fails to directly discover joint options that can improve the connectivity of the joint state space. In this article, we propose a new algorithm to directly compute multiagent options with collaborative exploratory behaviors while still enjoying the ease of decomposition. Our key idea is to approximate the joint state space as the Kronecker product of individual agents' state spaces, based on which we can directly estimate the Fiedler vector of the joint state space using the Laplacian spectrum of individual agents' transition graphs. This decomposition enables us to efficiently construct multiagent joint options by encouraging agents to connect the subgoal joint states, which correspond to the minimum or maximum of the estimated joint Fiedler vector. Evaluation on multiagent collaborative tasks shows that our algorithm successfully identifies multiagent options and significantly outperforms prior works using single-agent options or no options, in terms of both faster exploration and higher cumulative rewards. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Artificial Intelligence | -
dc.subject | Kronecker product | -
dc.subject | multiagent reinforcement learning (MARL) | -
dc.subject | option discovery | -
dc.title | Learning Multiagent Options for Tabular Reinforcement Learning using Factor Graphs | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TAI.2022.3195818 | -
dc.identifier.scopus | eid_2-s2.0-85135749809 | -
dc.identifier.volume | 4 | -
dc.identifier.issue | 5 | -
dc.identifier.spage | 1141 | -
dc.identifier.epage | 1153 | -
dc.identifier.eissn | 2691-4581 | -
