Article: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning

Title: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
Authors: Du, Y; Yang, S; Huang, K
Keywords: Quantization (signal); Stochastic processes; Convergence; Distortion; Fasteners
Issue Date: 2020
Publisher: IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=78
Citation: IEEE Transactions on Signal Processing, 2020, v. 68, p. 2128-2142
Abstract: Edge machine learning involves the deployment of learning algorithms at the wireless network edge so as to leverage massive mobile data for enabling intelligent applications. The mainstream edge learning approach, federated learning, has been developed based on distributed gradient descent. In this approach, stochastic gradients are computed at edge devices and then transmitted to an edge server for updating a global AI model. Since each stochastic gradient is typically high-dimensional, communication overhead becomes a bottleneck for edge learning. To address this issue, we propose a novel framework of hierarchical gradient quantization and study its effect on the learning performance. First, the framework features a practical hierarchical architecture for decomposing the stochastic gradient into its norm and normalized block gradients, and efficiently quantizes them using a uniform quantizer and a low-dimensional Grassmannian codebook, respectively. Subsequently, the quantized normalized block gradients are scaled and cascaded to yield the quantized normalized stochastic gradient using a so-called hinge vector, which is compressed using another low-dimensional Grassmannian quantizer designed under the criterion of minimum distortion. The other feature of the framework is a bit-allocation scheme for reducing the distortion, which divides the total quantization bits to determine the resolutions of the low-dimensional quantizers. The framework is proved to guarantee model convergence by analyzing the convergence rate as a function of quantization bits. Furthermore, by simulation, our design is shown to substantially reduce the communication overhead compared with the state-of-the-art signSGD scheme, while achieving similar learning accuracies.
Persistent Identifier: http://hdl.handle.net/10722/290905
ISSN: 1053-587X
2021 Impact Factor: 4.875
2020 SCImago Journal Rankings: 1.638
ISI Accession Number ID: WOS:000531386200001
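The abstract above describes a hierarchical quantization pipeline: split the stochastic gradient into its norm and normalized block gradients, quantize the norm with a uniform scalar quantizer and each block direction on a low-dimensional Grassmannian codebook, then quantize the hinge vector of block norms with another low-dimensional codebook before cascading everything back together. The snippet below is a minimal NumPy sketch of that pipeline under simplifying assumptions: the codebooks are random unit-norm matrices rather than the paper's minimum-distortion Grassmannian designs, the bit-allocation scheme is omitted, and all function and parameter names (grassmann_codebook, hierarchical_quantize, norm_range, and so on) are illustrative, not taken from the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def grassmann_codebook(dim, bits):
    """Random unit-norm codebook standing in for a Grassmannian design (sketch only)."""
    C = rng.standard_normal((2 ** bits, dim))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def quantize_direction(v, codebook):
    """Return the codeword closest in direction to v (max |inner product|), with sign."""
    scores = codebook @ v
    idx = int(np.argmax(np.abs(scores)))
    sign = 1.0 if scores[idx] >= 0 else -1.0
    return sign * codebook[idx]

def uniform_quantize(x, bits, x_max):
    """Uniform scalar quantizer on [0, x_max]."""
    levels = 2 ** bits - 1
    step = x_max / levels
    return float(np.clip(np.round(x / step), 0, levels)) * step

def hierarchical_quantize(g, block_dim, norm_bits, block_bits, hinge_bits, norm_range):
    """Sketch of norm / block-direction / hinge-vector quantization of a gradient."""
    blocks = g.reshape(-1, block_dim)              # split into low-dimensional blocks
    q_norm = uniform_quantize(np.linalg.norm(g), norm_bits, norm_range)

    # Quantize each normalized block gradient on a low-dimensional codebook.
    block_cb = grassmann_codebook(block_dim, block_bits)
    q_dirs = np.array([quantize_direction(b / (np.linalg.norm(b) + 1e-12), block_cb)
                       for b in blocks])

    # Hinge vector: the normalized vector of block norms, quantized on its own codebook.
    hinge = np.linalg.norm(blocks, axis=1)
    hinge /= (np.linalg.norm(hinge) + 1e-12)
    q_hinge = quantize_direction(hinge, grassmann_codebook(len(hinge), hinge_bits))

    # Cascade: scale each quantized block direction by its hinge entry,
    # renormalize, then restore the quantized gradient norm.
    recon = (q_hinge[:, None] * q_dirs).reshape(-1)
    return q_norm * recon / (np.linalg.norm(recon) + 1e-12)

# Toy usage: quantize a 1024-dimensional gradient and check the relative distortion.
g = rng.standard_normal(1024)
g_hat = hierarchical_quantize(g, block_dim=8, norm_bits=8,
                              block_bits=6, hinge_bits=10,
                              norm_range=2 * np.linalg.norm(g))
print("relative distortion:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```

In the paper, the per-quantizer resolutions (here norm_bits, block_bits, hinge_bits) are set by the bit-allocation scheme that divides the total bit budget to minimize distortion; in this sketch they are simply fixed by hand.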

 

DC Field: Value
dc.contributor.author: Du, Y
dc.contributor.author: Yang, S
dc.contributor.author: Huang, K
dc.date.accessioned: 2020-11-02T05:48:45Z
dc.date.available: 2020-11-02T05:48:45Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Transactions on Signal Processing, 2020, v. 68, p. 2128-2142
dc.identifier.issn: 1053-587X
dc.identifier.uri: http://hdl.handle.net/10722/290905
dc.description.abstract: Edge machine learning involves the deployment of learning algorithms at the wireless network edge so as to leverage massive mobile data for enabling intelligent applications. The mainstream edge learning approach, federated learning, has been developed based on distributed gradient descent. In this approach, stochastic gradients are computed at edge devices and then transmitted to an edge server for updating a global AI model. Since each stochastic gradient is typically high-dimensional, communication overhead becomes a bottleneck for edge learning. To address this issue, we propose a novel framework of hierarchical gradient quantization and study its effect on the learning performance. First, the framework features a practical hierarchical architecture for decomposing the stochastic gradient into its norm and normalized block gradients, and efficiently quantizes them using a uniform quantizer and a low-dimensional Grassmannian codebook, respectively. Subsequently, the quantized normalized block gradients are scaled and cascaded to yield the quantized normalized stochastic gradient using a so-called hinge vector, which is compressed using another low-dimensional Grassmannian quantizer designed under the criterion of minimum distortion. The other feature of the framework is a bit-allocation scheme for reducing the distortion, which divides the total quantization bits to determine the resolutions of the low-dimensional quantizers. The framework is proved to guarantee model convergence by analyzing the convergence rate as a function of quantization bits. Furthermore, by simulation, our design is shown to substantially reduce the communication overhead compared with the state-of-the-art signSGD scheme, while achieving similar learning accuracies.
dc.language: eng
dc.publisher: IEEE. The Journal's web site is located at https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=78
dc.relation.ispartof: IEEE Transactions on Signal Processing
dc.rights: IEEE Transactions on Signal Processing. Copyright © IEEE.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Quantization (signal)
dc.subject: Stochastic processes
dc.subject: Convergence
dc.subject: Distortion
dc.subject: Fasteners
dc.title: High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning
dc.type: Article
dc.identifier.email: Huang, K: huangkb@eee.hku.hk
dc.identifier.authority: Huang, K=rp01875
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TSP.2020.2983166
dc.identifier.scopus: eid_2-s2.0-85084391890
dc.identifier.hkuros: 318038
dc.identifier.volume: 68
dc.identifier.spage: 2128
dc.identifier.epage: 2142
dc.identifier.isi: WOS:000531386200001
dc.publisher.place: United States
dc.identifier.issnl: 1053-587X
