Conference Paper: Multi-agent cooperative alternating Q-learning caching in D2D-enabled cellular networks

Title: Multi-agent cooperative alternating Q-learning caching in D2D-enabled cellular networks
Authors: Fang, Xinyuan; Zhang, Tiankui; Liu, Yuanwei; Zeng, Zhimin
Issue Date: 2019
Citation: Proceedings - IEEE Global Communications Conference, GLOBECOM, 2019, article no. 9014053
Abstract: Edge caching has become an effective solution to cope with the challenges brought by massive content delivery in cellular networks. In device-to-device (D2D) enabled caching cellular networks with time-varying user terminal (UT) movement and content popularity, we model these dynamic networks as a stochastic game to design a cooperative caching placement strategy. We consider the long-term caching placement reward of all UTs. Each UT becomes a learning agent, and the caching placement strategy corresponds to the actions taken by the UTs. To solve the stochastic game problem, we propose a multi-agent cooperative alternating Q-learning (CAQL) caching placement algorithm. We discuss the convergence and complexity of CAQL, which converges to a stable caching policy with low space complexity. Simulation results show that the proposed algorithm effectively reduces the backhaul load and the average content access delay in dynamic environments.
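The abstract's setup — each UT as a learning agent whose action is a caching placement, with a cooperative reward — can be illustrated with a minimal sketch. This is not the paper's CAQL algorithm: it uses plain independent Q-learning with a single-state (bandit-style) simplification, and all names, reward values (1.0 for a local cache hit, 0.5 for a D2D neighbour hit, 0.0 for backhaul), and the Zipf-like popularity model are assumptions made for illustration only.

```python
import random
from collections import defaultdict

# Illustrative sketch only (NOT the paper's CAQL): each user terminal (UT)
# is an independent Q-learning agent choosing one content item to cache.
N_UTS = 4          # number of user terminals (agents)  [assumed]
N_CONTENTS = 6     # content catalogue size             [assumed]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Zipf-like request popularity: lower-indexed items are requested more often.
weights = [1.0 / (k + 1) for k in range(N_CONTENTS)]

def sample_request():
    return random.choices(range(N_CONTENTS), weights=weights)[0]

# One Q-table per agent; state is collapsed to a single state here,
# so the "state" key is just the action (content item cached).
q = [defaultdict(float) for _ in range(N_UTS)]

def choose(agent):
    # Epsilon-greedy action selection over content items.
    if random.random() < EPS:
        return random.randrange(N_CONTENTS)
    return max(range(N_CONTENTS), key=lambda a: q[agent][a])

random.seed(0)
for step in range(5000):
    caches = [choose(i) for i in range(N_UTS)]
    requests = [sample_request() for _ in range(N_UTS)]
    for i in range(N_UTS):
        # Cooperative reward shaping (assumed values): local hit beats a
        # D2D neighbour hit, which beats fetching over the backhaul.
        if requests[i] == caches[i]:
            r = 1.0
        elif any(requests[i] == caches[j] for j in range(N_UTS) if j != i):
            r = 0.5
        else:
            r = 0.0
        a = caches[i]
        best_next = max(q[i][b] for b in range(N_CONTENTS))
        q[i][a] += ALPHA * (r + GAMMA * best_next - q[i][a])

# After training, each agent's greedy choice should lean toward popular items.
learned = [max(range(N_CONTENTS), key=lambda a: q[i][a]) for i in range(N_UTS)]
print(learned)
```

The sketch omits the "alternating" update scheme and the time-varying UT mobility and popularity dynamics that the paper models as a stochastic game; it only shows the agent/action/reward decomposition the abstract describes.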
Persistent Identifier: http://hdl.handle.net/10722/349412
ISSN: 2334-0983

 

DC Field: Value
dc.contributor.author: Fang, Xinyuan
dc.contributor.author: Zhang, Tiankui
dc.contributor.author: Liu, Yuanwei
dc.contributor.author: Zeng, Zhimin
dc.date.accessioned: 2024-10-17T06:58:21Z
dc.date.available: 2024-10-17T06:58:21Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings - IEEE Global Communications Conference, GLOBECOM, 2019, article no. 9014053
dc.identifier.issn: 2334-0983
dc.identifier.uri: http://hdl.handle.net/10722/349412
dc.description.abstract: Edge caching has become an effective solution to cope with the challenges brought by massive content delivery in cellular networks. In device-to-device (D2D) enabled caching cellular networks with time-varying user terminal (UT) movement and content popularity, we model these dynamic networks as a stochastic game to design a cooperative caching placement strategy. We consider the long-term caching placement reward of all UTs. Each UT becomes a learning agent, and the caching placement strategy corresponds to the actions taken by the UTs. To solve the stochastic game problem, we propose a multi-agent cooperative alternating Q-learning (CAQL) caching placement algorithm. We discuss the convergence and complexity of CAQL, which converges to a stable caching policy with low space complexity. Simulation results show that the proposed algorithm effectively reduces the backhaul load and the average content access delay in dynamic environments.
dc.language: eng
dc.relation.ispartof: Proceedings - IEEE Global Communications Conference, GLOBECOM
dc.title: Multi-agent cooperative alternating Q-learning caching in D2D-enabled cellular networks
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/GLOBECOM38437.2019.9014053
dc.identifier.scopus: eid_2-s2.0-85081967608
dc.identifier.spage: article no. 9014053
dc.identifier.epage: article no. 9014053
dc.identifier.eissn: 2576-6813
