Links for fulltext (may require subscription)
- Publisher Website: https://doi.org/10.1109/TAI.2022.3195818
- Scopus: eid_2-s2.0-85135749809
Citations:
- Scopus: 0
Article: Learning Multiagent Options for Tabular Reinforcement Learning using Factor Graphs
| Title | Learning Multiagent Options for Tabular Reinforcement Learning using Factor Graphs |
|---|---|
| Authors | Chen, Jiayu; Chen, Jingdi; Lan, Tian; Aggarwal, Vaneet |
| Keywords | Kronecker product; multiagent reinforcement learning (MARL); option discovery |
| Issue Date | 2023 |
| Citation | IEEE Transactions on Artificial Intelligence, 2023, v. 4, n. 5, p. 1141-1153 |
| Abstract | Covering option discovery has been developed to improve the exploration of reinforcement learning in single-agent scenarios, where only sparse reward signals are available. It aims to connect the most distant states identified through the Fiedler vector of the state transition graph. However, the approach cannot be directly extended to multiagent scenarios, since the joint state space grows exponentially with the number of agents, thus prohibiting efficient option computation. Existing research adopting options in multiagent scenarios still relies on single-agent algorithms and fails to directly discover joint options that can improve the connectivity of the joint state space. In this article, we propose a new algorithm to directly compute multiagent options with collaborative exploratory behaviors while still enjoying the ease of decomposition. Our key idea is to approximate the joint state space as the Kronecker product of individual agents' state spaces, based on which we can directly estimate the Fiedler vector of the joint state space using the Laplacian spectra of individual agents' transition graphs. This decomposition enables us to efficiently construct multiagent joint options by encouraging agents to connect the subgoal joint states, which correspond to the minimum or maximum of the estimated joint Fiedler vector. Evaluation on multiagent collaborative tasks shows that our algorithm can successfully identify multiagent options and significantly outperforms prior works using single-agent options or no options, in terms of both faster exploration and higher cumulative rewards. |
| Persistent Identifier | http://hdl.handle.net/10722/361670 |
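
The key computational step the abstract describes is estimating the Fiedler vector of the joint transition graph from the factor graphs alone. Below is a minimal sketch of that decomposition idea, using the symmetrically normalized Laplacian, whose spectrum factorizes exactly over a Kronecker (tensor) product of graphs; the function names and the toy chain graphs are illustrative assumptions, and the paper's actual estimation procedure may differ in detail.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def estimate_joint_fiedler(A1, A2):
    """Estimate the Fiedler vector of a joint transition graph modeled as
    the Kronecker product of two factor graphs (illustrative helper).

    In the tensor product graph, degrees multiply, so the normalized
    adjacency factorizes as A1_hat (x) A2_hat, with eigenpairs
    (lam_i * mu_j, u_i (x) v_j). The normalized Laplacian eigenvalue is
    1 - lam_i * mu_j; the Fiedler vector is the eigenvector attached to
    the second-smallest of these values.
    """
    lam1, U1 = np.linalg.eigh(normalized_adjacency(A1))
    lam2, U2 = np.linalg.eigh(normalized_adjacency(A2))
    joint = 1.0 - np.outer(lam1, lam2)            # joint Laplacian spectrum
    i, j = np.unravel_index(np.argsort(joint, axis=None)[1], joint.shape)
    fiedler = np.kron(U1[:, i], U2[:, j])         # estimated joint Fiedler vector
    # Subgoal joint states sit at the extremes of the Fiedler vector.
    return fiedler, int(np.argmin(fiedler)), int(np.argmax(fiedler))

# Toy factor graphs: chain worlds with self-loops (the agent may stay put).
# The self-loops also keep the tensor product connected: chains are
# bipartite, and the tensor product of bipartite graphs would otherwise
# split into two components.
def chain_with_self_loops(n):
    A = np.eye(n)
    A += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return A

fiedler, s_min, s_max = estimate_joint_fiedler(chain_with_self_loops(4),
                                               chain_with_self_loops(5))
print(fiedler.shape, s_min, s_max)   # (20,) and the two subgoal joint states
```

The decomposition is what makes this tractable: only the small per-agent factor graphs are eigendecomposed, and the joint Fiedler vector is assembled as a Kronecker product rather than computed on the exponentially large joint state space.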
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chen, Jiayu | - |
| dc.contributor.author | Chen, Jingdi | - |
| dc.contributor.author | Lan, Tian | - |
| dc.contributor.author | Aggarwal, Vaneet | - |
| dc.date.accessioned | 2025-09-16T04:18:42Z | - |
| dc.date.available | 2025-09-16T04:18:42Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | IEEE Transactions on Artificial Intelligence, 2023, v. 4, n. 5, p. 1141-1153 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/361670 | - |
| dc.description.abstract | Covering option discovery has been developed to improve the exploration of reinforcement learning in single-agent scenarios, where only sparse reward signals are available. It aims to connect the most distant states identified through the Fiedler vector of the state transition graph. However, the approach cannot be directly extended to multiagent scenarios, since the joint state space grows exponentially with the number of agents, thus prohibiting efficient option computation. Existing research adopting options in multiagent scenarios still relies on single-agent algorithms and fails to directly discover joint options that can improve the connectivity of the joint state space. In this article, we propose a new algorithm to directly compute multiagent options with collaborative exploratory behaviors while still enjoying the ease of decomposition. Our key idea is to approximate the joint state space as the Kronecker product of individual agents' state spaces, based on which we can directly estimate the Fiedler vector of the joint state space using the Laplacian spectra of individual agents' transition graphs. This decomposition enables us to efficiently construct multiagent joint options by encouraging agents to connect the subgoal joint states, which correspond to the minimum or maximum of the estimated joint Fiedler vector. Evaluation on multiagent collaborative tasks shows that our algorithm can successfully identify multiagent options and significantly outperforms prior works using single-agent options or no options, in terms of both faster exploration and higher cumulative rewards. | - |
| dc.language | eng | - |
| dc.relation.ispartof | IEEE Transactions on Artificial Intelligence | - |
| dc.subject | Kronecker product | - |
| dc.subject | multiagent reinforcement learning (MARL) | - |
| dc.subject | option discovery | - |
| dc.title | Learning Multiagent Options for Tabular Reinforcement Learning using Factor Graphs | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/TAI.2022.3195818 | - |
| dc.identifier.scopus | eid_2-s2.0-85135749809 | - |
| dc.identifier.volume | 4 | - |
| dc.identifier.issue | 5 | - |
| dc.identifier.spage | 1141 | - |
| dc.identifier.epage | 1153 | - |
| dc.identifier.eissn | 2691-4581 | - |
