File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/JSAC.2019.2927068
- Scopus: eid_2-s2.0-85068566932
- Web of Science: WOS:000480347600012
Article: Scaling Geo-distributed Network Function Chains: A Prediction and Learning Framework
Title | Scaling Geo-distributed Network Function Chains: A Prediction and Learning Framework |
---|---|
Authors | LUO, Z; Wu, C; Li, Z; Zhou, W |
Keywords | deep learning; network function virtualization; reinforcement learning; service function chain |
Issue Date | 2019 |
Publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=49 |
Citation | IEEE Journal on Selected Areas in Communications, 2019, v. 37 n. 8, p. 1838-1850 |
Abstract | Geo-distributed virtual network function (VNF) chaining has proven useful in scenarios such as network slicing in 5G networks and network traffic processing in the WAN. Agile scaling of VNF chains according to real-time traffic rates is key to network function virtualization. Designing efficient scaling algorithms is challenging, especially for geo-distributed chains, where the bandwidth costs and latencies incurred by WAN traffic are important but difficult to account for in scaling decisions. Existing studies have largely resorted to optimization algorithms in scaling design. Aiming at better decisions empowered by in-depth learning from experience, this paper proposes a deep learning-based framework for scaling geo-distributed VNF chains, exploring inherent patterns of traffic variation and good deployment strategies over time. We propose a novel combination of a recurrent neural network, which serves as the traffic model predicting upcoming flow rates, and a deep reinforcement learning (DRL) agent that makes chain placement decisions. We adopt the experience replay technique based on the actor-critic DRL algorithm to optimize the learning results. Trace-driven simulation shows that, with limited offline training, our learning framework adapts quickly to traffic dynamics online and achieves lower system costs than existing representative algorithms. |
Persistent Identifier | http://hdl.handle.net/10722/273139 |
ISSN | 0733-8716 (2023 Impact Factor: 13.8; 2023 SCImago Journal Rankings: 8.707) |
ISI Accession Number | WOS:000480347600012 |
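The abstract describes a two-part pipeline: a recurrent model predicts upcoming flow rates, and an actor-critic DRL agent uses those predictions to place the chain at lower cost. As a rough, self-contained illustration of that idea only (not the authors' implementation), the toy sketch below stands in exponential smoothing for the RNN predictor and uses a tabular actor-critic with a softmax placement policy; the site count, cost model, learning rates, and all names are illustrative assumptions:

```python
import numpy as np

# Toy sketch of "predict traffic, then learn placement" (illustrative only).
# The RNN traffic model is replaced by exponential smoothing, and the DRL
# agent by a tabular actor-critic; the per-site bandwidth prices are made up.

rng = np.random.default_rng(0)

def predict_next_rate(history, alpha=0.5):
    """Stand-in for the RNN traffic model: smooth the recent flow rates."""
    est = history[0]
    for r in history[1:]:
        est = alpha * r + (1 - alpha) * est
    return est

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def cost(site, rate):
    """Illustrative system cost: per-site bandwidth price times flow rate."""
    bandwidth_price = np.array([1.0, 0.6, 1.4])  # assumed prices per site
    return bandwidth_price[site] * rate

N_SITES = 3                # candidate datacenters for the chain (assumed)
theta = np.zeros(N_SITES)  # actor: placement preferences
value = 0.0                # critic: running baseline of the cost

history = [10.0]
for step in range(1000):
    # synthetic traffic: mild periodic trend plus noise
    history.append(10.0 + 2 * np.sin(step / 20) + rng.normal(0, 0.5))
    rate = predict_next_rate(history[-10:])

    probs = softmax(theta)
    site = rng.choice(N_SITES, p=probs)      # sample a placement
    c = cost(site, rate)

    advantage = value - c                    # cheaper than baseline -> positive
    grad = -probs
    grad[site] += 1.0                        # d log pi(site) / d theta
    theta += 0.05 * advantage * grad         # actor update (policy gradient)
    value += 0.1 * (c - value)               # critic update (running mean)

best = int(np.argmax(softmax(theta)))
print(best)  # index of the site the learned policy prefers
```

With these assumed prices, the agent should learn to prefer the cheapest site: the critic's baseline makes low-cost placements yield positive advantages, which the policy-gradient step reinforces, mirroring (in miniature) how the paper's actor-critic agent steers placements toward lower system cost.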
DC Field | Value | Language |
---|---|---|
dc.contributor.author | LUO, Z | - |
dc.contributor.author | Wu, C | - |
dc.contributor.author | Li, Z | - |
dc.contributor.author | Zhou, W | - |
dc.date.accessioned | 2019-08-06T09:23:16Z | - |
dc.date.available | 2019-08-06T09:23:16Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | IEEE Journal on Selected Areas in Communications, 2019, v. 37 n. 8, p. 1838-1850 | - |
dc.identifier.issn | 0733-8716 | - |
dc.identifier.uri | http://hdl.handle.net/10722/273139 | - |
dc.description.abstract | Geo-distributed virtual network function (VNF) chaining has proven useful in scenarios such as network slicing in 5G networks and network traffic processing in the WAN. Agile scaling of VNF chains according to real-time traffic rates is key to network function virtualization. Designing efficient scaling algorithms is challenging, especially for geo-distributed chains, where the bandwidth costs and latencies incurred by WAN traffic are important but difficult to account for in scaling decisions. Existing studies have largely resorted to optimization algorithms in scaling design. Aiming at better decisions empowered by in-depth learning from experience, this paper proposes a deep learning-based framework for scaling geo-distributed VNF chains, exploring inherent patterns of traffic variation and good deployment strategies over time. We propose a novel combination of a recurrent neural network, which serves as the traffic model predicting upcoming flow rates, and a deep reinforcement learning (DRL) agent that makes chain placement decisions. We adopt the experience replay technique based on the actor-critic DRL algorithm to optimize the learning results. Trace-driven simulation shows that, with limited offline training, our learning framework adapts quickly to traffic dynamics online and achieves lower system costs than existing representative algorithms. | - |
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=49 | - |
dc.relation.ispartof | IEEE Journal on Selected Areas in Communications | - |
dc.rights | IEEE Journal on Selected Areas in Communications. Copyright © Institute of Electrical and Electronics Engineers. | - |
dc.rights | ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | deep learning | - |
dc.subject | network function virtualization | - |
dc.subject | reinforcement learning | - |
dc.subject | service function chain | - |
dc.title | Scaling Geo-distributed Network Function Chains: A Prediction and Learning Framework | - |
dc.type | Article | - |
dc.identifier.email | Wu, C: cwu@cs.hku.hk | - |
dc.identifier.authority | Wu, C=rp01397 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/JSAC.2019.2927068 | - |
dc.identifier.scopus | eid_2-s2.0-85068566932 | - |
dc.identifier.hkuros | 299703 | - |
dc.identifier.volume | 37 | - |
dc.identifier.issue | 8 | - |
dc.identifier.spage | 1838 | - |
dc.identifier.epage | 1850 | - |
dc.identifier.isi | WOS:000480347600012 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 0733-8716 | - |