Postgraduate thesis: Deep reinforcement learning based efficient resource management in vehicular edge networks
Field | Value
---|---
Title | Deep reinforcement learning based efficient resource management in vehicular edge networks
Authors | Guo, Yanxiang (郭妍湘)
Advisors | Kwok, YK
Issue Date | 2020 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Guo, Y. [郭妍湘]. (2020). Deep reinforcement learning based efficient resource management in vehicular edge networks. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
Abstract | With the emergence of the Internet of Things (IoT), a massive amount of data has been generated. However, the data processing methods of traditional Mobile Cloud Computing (MCC) may not fully meet user demands, owing to limited network bandwidth and the constrained processing capacity of terminal devices. As an important branch of the IoT, the Internet of Vehicles (IoV) likewise demands a high quality of service for vehicles. Mobile Edge Computing (MEC) can therefore be leveraged to supplement MCC, providing satisfactory services for mobile devices that run computation-intensive and latency-sensitive tasks.
To achieve efficient resource management in IoV, both caching and offloading techniques can be applied in vehicular networks; these two techniques form the two main research works of this thesis. Caching on edge nodes has become a key technology for coping with today's mass data and can effectively reduce the burden on vehicular networks. Conversely, tasks can be offloaded to MEC servers when a mobile device's processing capability cannot meet its own needs.
In the first study, on caching, a Markov Deep Q-Learning (MDQL) model is proposed to formulate the caching strategy. Via vehicle mobility prediction, it investigates how to pre-cache packets at edge nodes so as to reduce pre-fetch redundancy and improve data transmission efficiency. A k-order Markov model first predicts the mobility of vehicles, and the prediction results serve as the training input of deep reinforcement learning (DRL); illustrative sketches of the prediction step and the reward balance appear after this record table. The MDQL model reduces the size of the action space and the computational complexity of DRL while balancing the cache hit rate against the cache replacement rate.
The second work of this thesis focuses on designing an appropriate offloading strategy for IoV. Most existing computation offloading strategies consider only a single resource. In this part of the study, therefore, an energy-aware multi-resource collaborative computational offloading (MCCO) algorithm based on DRL is proposed to solve the multi-user computation offloading problem under multi-resource conditions; a sketch of the underlying cost trade-off also appears below.
In summary, this thesis proposes a caching strategy (the MDQL model) and an offloading strategy (the MCCO algorithm) to achieve efficient resource management in IoV. Simulation results demonstrate the effectiveness of both. |
Degree | Master of Philosophy |
Subject | Internet of things; Machine learning; Artificial intelligence
Dept/Program | Electrical and Electronic Engineering |
Persistent Identifier | http://hdl.handle.net/10722/288520 |
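The abstract's first contribution rests on k-order Markov mobility prediction feeding a deep Q-network. The thesis's implementation is not reproduced in this record, so what follows is only a minimal sketch of the prediction step, assuming road-segment/cell IDs as states, order k = 2, and simple frequency-based transition estimates; the class name and trace format are hypothetical.

```python
from collections import Counter, defaultdict

class KOrderMarkovPredictor:
    """Predicts a vehicle's next cell from its last k observed cells.
    A hypothetical sketch of the k-order Markov step described in the
    abstract; not the thesis's actual implementation."""

    def __init__(self, k=2):
        self.k = k
        self.transitions = defaultdict(Counter)  # history tuple -> next-cell counts

    def fit(self, traces):
        """traces: iterable of cell-ID sequences, one per vehicle trajectory."""
        for trace in traces:
            for i in range(len(trace) - self.k):
                history = tuple(trace[i:i + self.k])
                self.transitions[history][trace[i + self.k]] += 1

    def predict(self, recent_cells):
        """Return (most likely next cell, estimated probability)."""
        history = tuple(recent_cells[-self.k:])
        counts = self.transitions.get(history)
        if not counts:
            return None, 0.0
        next_cell, freq = counts.most_common(1)[0]
        return next_cell, freq / sum(counts.values())

# Toy usage: train on historical trajectories, then query a live trace.
traces = [["A", "B", "C", "D"], ["A", "B", "C", "E"], ["B", "C", "D", "E"]]
model = KOrderMarkovPredictor(k=2)
model.fit(traces)
print(model.predict(["A", "B", "C"]))  # -> ('D', 0.666...) under these toy traces
```

Under MDQL, the predicted cells would then restrict which contents the DQN considers caching at each edge node, which is how the abstract's claimed action-space reduction could arise.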
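Likewise, the abstract says MDQL balances the cache hit rate against the cache replacement rate but does not give the exact objective. One plausible (assumed) reward shaping for such an agent is a weighted difference of the two rates; alpha and beta below are illustrative weights, not values from the thesis.

```python
def caching_reward(hits, requests, replacements, capacity, alpha=1.0, beta=0.5):
    """Assumed reward: rises with the cache hit rate, falls with the
    cache replacement rate. Not the thesis's actual reward function."""
    hit_rate = hits / requests if requests else 0.0
    replacement_rate = replacements / capacity if capacity else 0.0
    return alpha * hit_rate - beta * replacement_rate

print(caching_reward(hits=42, requests=60, replacements=5, capacity=50))
# 1.0 * 0.7 - 0.5 * 0.1 = 0.65
```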
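For the second contribution, the abstract describes an energy-aware, multi-resource offloading problem but not its cost model. The sketch below uses a textbook MEC formulation of the local-versus-edge trade-off (dynamic CPU energy kappa·f²·C, upload delay D/r) purely as an assumed illustration; MCCO's actual DRL policy would learn this decision jointly across users and resources, and every parameter value here is invented.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # required CPU cycles
    data_bits: float   # input data to upload if offloaded

def local_cost(task, f_local=1e9, kappa=1e-27, w_time=0.5, w_energy=0.5):
    """Weighted latency + energy of executing on the vehicle itself."""
    t = task.cycles / f_local                # execution time (s)
    e = kappa * f_local**2 * task.cycles     # dynamic CPU energy (J)
    return w_time * t + w_energy * e

def edge_cost(task, rate=20e6, p_tx=0.5, f_edge=10e9, w_time=0.5, w_energy=0.5):
    """Weighted latency + energy of uploading to and running on an MEC server."""
    t_up = task.data_bits / rate             # upload time (s)
    t_exec = task.cycles / f_edge            # remote execution time (s)
    e_up = p_tx * t_up                       # radio energy spent by the device (J)
    return w_time * (t_up + t_exec) + w_energy * e_up

def offload_decision(tasks):
    """Greedy per-task baseline: pick whichever side is cheaper. A DRL agent
    like MCCO would instead learn this mapping across users and resources."""
    return ["edge" if edge_cost(t) < local_cost(t) else "local" for t in tasks]

tasks = [Task(cycles=5e8, data_bits=2e6), Task(cycles=1e8, data_bits=8e6)]
print(offload_decision(tasks))  # -> ['edge', 'local'] with these toy numbers
```

The two toy tasks illustrate the trade-off the abstract targets: a compute-heavy, data-light task benefits from the edge, while a data-heavy, compute-light task is cheaper to run locally.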
DC Field | Value | Language |
---|---|---
dc.contributor.advisor | Kwok, YK | - |
dc.contributor.author | Guo, Yanxiang | - |
dc.contributor.author | 郭妍湘 | - |
dc.date.accessioned | 2020-10-06T01:20:47Z | - |
dc.date.available | 2020-10-06T01:20:47Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Guo, Y. [郭妍湘]. (2020). Deep reinforcement learning based efficient resource management in vehicular edge networks. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/288520 | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Internet of things | - |
dc.subject.lcsh | Machine learning | - |
dc.subject.lcsh | Artificial intelligence | - |
dc.title | Deep reinforcement learning based efficient resource management in vehicular edge networks | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Master of Philosophy | - |
dc.description.thesislevel | Master | - |
dc.description.thesisdiscipline | Electrical and Electronic Engineering | - |
dc.description.nature | published_or_final_version | - |
dc.date.hkucongregation | 2020 | - |
dc.identifier.mmsid | 991044284192003414 | - |