Article: Accelerating DNN Inference With Reliability Guarantee in Vehicular Edge Computing

Title: Accelerating DNN Inference With Reliability Guarantee in Vehicular Edge Computing
Authors: Liu, K; Liu, CH; Yan, GZ; Lee, VCS; Cao, JN
Keywords: Approximation algorithms
Computational modeling
Data models
DNN inference acceleration
mobility-aware offloading
overlapped partitioning
Peer-to-peer computing
Reliability
reliability guarantee
Resource management
Task analysis
Vehicular edge computing
Issue Date: 1-Jun-2023
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE/ACM Transactions on Networking, 2023
Abstract: This paper explores accelerating Deep Neural Network (DNN) inference with a reliability guarantee in Vehicular Edge Computing (VEC) by considering the synergistic impacts of vehicle mobility and Vehicle-to-Vehicle/Infrastructure (V2V/V2I) communications. First, we show the necessity of striking a balance between DNN inference acceleration and reliability in VEC, and give insights into the design rationale by analyzing the features of overlapped DNN partitioning and mobility-aware task offloading. Second, we formulate the Cooperative Partitioning and Offloading (CPO) problem by presenting a cooperative DNN partitioning and offloading scenario, followed by deriving an offloading reliability model and a DNN inference delay model. The CPO problem is proved to be NP-hard. Third, we propose two approximation algorithms, i.e., the Submodular Approximation Allocation Algorithm (SA(3)) and the Feed Me the Rest (FMtR) algorithm. In particular, SA(3) determines the edge allocation in a centralized way, achieving a 1/3-optimal approximation for maximizing the inference reliability. On this basis, FMtR partitions the DNN models and offloads the tasks to the allocated edge nodes in a distributed way, achieving a 1/2-optimal approximation for maximizing the inference reliability. Finally, we build a simulation model and give a comprehensive performance evaluation, which demonstrates the superiority of the proposed solutions.
Persistent Identifier: http://hdl.handle.net/10722/357014
ISSN: 1063-6692
2023 Impact Factor: 3.0
2023 SCImago Journal Rankings: 2.034
ISI Accession Number ID: WOS:001006077400001
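Illustrative note: the abstract frames reliable offloading as submodular maximization, with SA(3) allocating edge nodes to inference tasks so that overlapped DNN partitions serve as redundant replicas. Below is a minimal, generic greedy-allocation sketch in that spirit; it is not the paper's SA(3). The coverage-style reliability objective, the per-node capacity constraint, and every name in it (greedy_allocate, link_reliability, etc.) are illustrative assumptions, and the 1/3 and 1/2 approximation guarantees proved in the paper apply to its algorithms, not to this toy.

```python
# A minimal greedy sketch of centralized task-to-edge allocation driven by a
# submodular reliability objective. NOT the paper's SA(3): the reliability
# model and every name below are hypothetical, for illustration only.
from itertools import product

def reliability(assignment, link_reliability):
    # Toy coverage-style objective: a task succeeds unless every edge node
    # serving one of its overlapped partitions fails independently.
    total = 0.0
    for task, nodes in assignment.items():
        fail_all = 1.0
        for n in nodes:
            fail_all *= 1.0 - link_reliability[(task, n)]
        total += 1.0 - fail_all
    return total

def greedy_allocate(tasks, nodes, capacity, link_reliability):
    # Repeatedly commit the (task, node) pair with the largest marginal
    # reliability gain until no node has spare capacity or no pair helps.
    assignment = {t: set() for t in tasks}
    load = {n: 0 for n in nodes}
    while True:
        base = reliability(assignment, link_reliability)
        best, best_gain = None, 0.0
        for t, n in product(tasks, nodes):
            if n in assignment[t] or load[n] >= capacity[n]:
                continue
            assignment[t].add(n)  # tentatively add, measure marginal gain
            gain = reliability(assignment, link_reliability) - base
            assignment[t].remove(n)
            if gain > best_gain:
                best, best_gain = (t, n), gain
        if best is None:
            return assignment
        t, n = best
        assignment[t].add(n)
        load[n] += 1

# Hypothetical example: two tasks, one roadside unit and one helper vehicle.
tasks = ["task_a", "task_b"]
nodes = ["rsu_1", "vehicle_2"]
capacity = {"rsu_1": 2, "vehicle_2": 1}
link_reliability = {(t, n): 0.8 for t in tasks for n in nodes}
print(greedy_allocate(tasks, nodes, capacity, link_reliability))
```

The structural fact the sketch leans on is diminishing returns: adding an edge node to a task multiplies its failure probability by another factor below one, so each extra replica helps less than the last. That is the submodularity property that makes greedy and local-search allocation amenable to the kind of constant-factor analysis the abstract cites.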

DC Field: Value
dc.contributor.author: Liu, K
dc.contributor.author: Liu, CH
dc.contributor.author: Yan, GZ
dc.contributor.author: Lee, VCS
dc.contributor.author: Cao, JN
dc.date.accessioned: 2025-06-23T08:52:56Z
dc.date.available: 2025-06-23T08:52:56Z
dc.date.issued: 2023-06-01
dc.identifier.citation: IEEE/ACM Transactions on Networking, 2023
dc.identifier.issn: 1063-6692
dc.identifier.uri: http://hdl.handle.net/10722/357014
dc.description.abstract: This paper explores accelerating Deep Neural Network (DNN) inference with a reliability guarantee in Vehicular Edge Computing (VEC) by considering the synergistic impacts of vehicle mobility and Vehicle-to-Vehicle/Infrastructure (V2V/V2I) communications. First, we show the necessity of striking a balance between DNN inference acceleration and reliability in VEC, and give insights into the design rationale by analyzing the features of overlapped DNN partitioning and mobility-aware task offloading. Second, we formulate the Cooperative Partitioning and Offloading (CPO) problem by presenting a cooperative DNN partitioning and offloading scenario, followed by deriving an offloading reliability model and a DNN inference delay model. The CPO problem is proved to be NP-hard. Third, we propose two approximation algorithms, i.e., the Submodular Approximation Allocation Algorithm (SA(3)) and the Feed Me the Rest (FMtR) algorithm. In particular, SA(3) determines the edge allocation in a centralized way, achieving a 1/3-optimal approximation for maximizing the inference reliability. On this basis, FMtR partitions the DNN models and offloads the tasks to the allocated edge nodes in a distributed way, achieving a 1/2-optimal approximation for maximizing the inference reliability. Finally, we build a simulation model and give a comprehensive performance evaluation, which demonstrates the superiority of the proposed solutions.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE/ACM Transactions on Networking
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Approximation algorithms
dc.subject: Computational modeling
dc.subject: Data models
dc.subject: DNN inference acceleration
dc.subject: mobility-aware offloading
dc.subject: overlapped partitioning
dc.subject: Peer-to-peer computing
dc.subject: Reliability
dc.subject: reliability guarantee
dc.subject: Resource management
dc.subject: Task analysis
dc.subject: Vehicular edge computing
dc.title: Accelerating DNN Inference With Reliability Guarantee in Vehicular Edge Computing
dc.type: Article
dc.identifier.doi: 10.1109/TNET.2023.3279512
dc.identifier.scopus: eid_2-s2.0-85161604175
dc.identifier.eissn: 1558-2566
dc.identifier.isi: WOS:001006077400001
dc.publisher.place: PISCATAWAY
dc.identifier.issnl: 1063-6692
