
Postgraduate thesis: Deep reinforcement learning based efficient resource management in vehicular edge networks

Title: Deep reinforcement learning based efficient resource management in vehicular edge networks
Authors: Guo, Yanxiang [郭妍湘]
Advisor(s): Kwok, YK
Issue Date: 2020
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Guo, Y. [郭妍湘]. (2020). Deep reinforcement learning based efficient resource management in vehicular edge networks. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: With the emergence of the Internet of Things (IoT), vast amounts of data are being generated. However, the data processing methods of traditional Mobile Cloud Computing (MCC) may not fully meet user demands, owing to the limitations of network bandwidth and the processing capacity of terminal devices. As an important branch of the IoT, the Internet of Vehicles (IoV) likewise requires a high quality of service for vehicles. Mobile Edge Computing (MEC) can therefore be leveraged to supplement MCC, providing satisfactory services for mobile devices performing computation-intensive and latency-sensitive tasks. To achieve efficient resource management in the IoV, both caching and offloading techniques can be applied in vehicular networks; these are the two main research works in this thesis. Caching on edge nodes has become one of the key technologies for handling today's mass data, and it can effectively reduce the burden on vehicular networks. Conversely, tasks can be offloaded to MEC servers when a mobile device's data-processing capability does not satisfy its own needs. In the first study, on caching, a Markov Deep Q-Learning (MDQL) model is proposed to formulate the caching strategy. How to pre-cache packets at edge nodes, reducing pre-fetch redundancy and improving data transmission efficiency, is investigated via vehicle mobility prediction. A k-order Markov model first predicts the mobility of vehicles, and the prediction results are used as the input to deep reinforcement learning (DRL) for training. The MDQL model reduces the size of the action space and the computational complexity of DRL while balancing the cache hit rate against the cache replacement rate. The second work of this thesis focuses on designing an appropriate offloading strategy in the IoV. Most existing computational offloading strategies consider only a single resource for offloading. Thus, in this part of the study, an energy-aware multi-resource collaborative computational offloading (MCCO) algorithm based on deep reinforcement learning is proposed to solve the multi-user computation offloading problem under multi-resource conditions. In summary, this thesis proposes caching and offloading strategies, the MDQL model and the MCCO algorithm respectively, to achieve efficient resource management in the IoV. Simulation results demonstrate the effectiveness of the proposed strategies.
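The k-order Markov mobility prediction described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the function names, the representation of a trajectory as a sequence of road-segment identifiers, and the use of raw transition counts are all assumptions for illustration.

```python
from collections import defaultdict, Counter

def build_korder_model(trajectories, k=2):
    # Count transitions from each length-k history of visited road
    # segments to the segment visited next (hypothetical representation:
    # a trajectory is a list of road-segment identifiers).
    model = defaultdict(Counter)
    for traj in trajectories:
        for i in range(len(traj) - k):
            history = tuple(traj[i:i + k])
            model[history][traj[i + k]] += 1
    return model

def predict_next(model, recent, k=2):
    # Rank candidate next segments by their empirical transition
    # probability, given the vehicle's k most recent segments.
    history = tuple(recent[-k:])
    counts = model.get(history)
    if not counts:
        return []  # unseen history: the model makes no prediction
    total = sum(counts.values())
    return [(seg, n / total) for seg, n in counts.most_common()]
```

In the thesis's pipeline, the ranked predictions would then serve as input features to the DRL agent that decides which packets to pre-cache at which edge node.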
Degree: Master of Philosophy
Subject: Internet of things
Machine learning
Artificial intelligence
Dept/Program: Electrical and Electronic Engineering
Persistent Identifier: http://hdl.handle.net/10722/288520

 

DC Field: Value
dc.contributor.advisor: Kwok, YK
dc.contributor.author: Guo, Yanxiang
dc.contributor.author: 郭妍湘
dc.date.accessioned: 2020-10-06T01:20:47Z
dc.date.available: 2020-10-06T01:20:47Z
dc.date.issued: 2020
dc.identifier.citation: Guo, Y. [郭妍湘]. (2020). Deep reinforcement learning based efficient resource management in vehicular edge networks. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/288520
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Internet of things
dc.subject.lcsh: Machine learning
dc.subject.lcsh: Artificial intelligence
dc.title: Deep reinforcement learning based efficient resource management in vehicular edge networks
dc.type: PG_Thesis
dc.description.thesisname: Master of Philosophy
dc.description.thesislevel: Master
dc.description.thesisdiscipline: Electrical and Electronic Engineering
dc.description.nature: published_or_final_version
dc.date.hkucongregation: 2020
dc.identifier.mmsid: 991044284192003414
