Article: Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
Title | Vertical Layering of Quantized Neural Networks for Heterogeneous Inference |
---|---|
Authors | Wu, Hai; He, Ruifei; Tan, Haoru; Qi, Xiaojuan; Huang, Kaibin |
Keywords | bit-width scalable network; Computational modeling; Degradation; Hardware; layered coding; multi-objective optimization; Neural networks; Optimization; Quantization (signal); quantization-aware training; Training |
Issue Date | 1-Dec-2023 |
Publisher | Institute of Electrical and Electronics Engineers |
Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 12, p. 15964-15978 |
Abstract | Although considerable progress has been made in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices, as one dedicated model needs to be trained, transmitted, and stored for each specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. It represents weights as a group of bits (i.e., vertical layers) organized from the most significant bit (also called the basic layer) to less significant bits (i.e., enhance layers). Hence, a neural network with arbitrary quantization precision can be obtained by adding the corresponding enhance layers to the basic layer. However, we empirically find that models obtained with existing quantization methods suffer severe performance degradation if they are adapted to the vertical-layered weight representation. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism with multi-objective optimization to train the shared source model weights so that they are updated simultaneously with the performance of all networks taken into account. After the model is trained, to construct a vertical-layered network, the lowest bit-width quantized weights become the basic layer, and every bit dropped along the downsampling process acts as an enhance layer. Our design is extensively evaluated on the CIFAR-100 and ImageNet datasets. Experiments show that the proposed vertical-layered representation and the developed once QAT scheme effectively embody multiple quantized networks in a single model, allow one-time training, and deliver performance comparable to that of quantized models tailored to any specific bit-width. |
Persistent Identifier | http://hdl.handle.net/10722/338164 |
ISSN | 0162-8828 (2023 Impact Factor: 20.8; 2023 SCImago Journal Rankings: 6.158) |
ISI Accession Number ID | WOS:001104973300118 |
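
The abstract above describes weights stored as a basic layer (the most significant bits) plus enhance layers (successively less significant bits), so that any intermediate bit-width is reconstructed by stacking layers. Below is a minimal NumPy sketch of that layered representation only; the `decompose`/`reconstruct` names, the unsigned 8-bit integers, the 2-bit basic layer, and the plain bit-plane truncation are illustrative assumptions and do not reproduce the paper's cascade downsampling or once QAT scheme.

```python
import numpy as np

# Sketch of a vertical-layered (bit-plane) weight representation, assuming
# unsigned b-bit integer weights and simple truncation when dropping bits.
# Illustrates only how a basic layer plus enhance layers can rebuild
# weights at several bit-widths; no training is performed here.

def decompose(w_q: np.ndarray, full_bits: int = 8, base_bits: int = 2):
    """Split b-bit integer weights into a basic layer (top base_bits bits)
    and one enhance layer per remaining bit plane, MSB-first."""
    basic = w_q >> (full_bits - base_bits)          # most significant base_bits
    enhance = [(w_q >> i) & 1                       # next bit planes, MSB-first
               for i in range(full_bits - base_bits - 1, -1, -1)]
    return basic, enhance

def reconstruct(basic: np.ndarray, enhance, target_bits: int, base_bits: int = 2):
    """Rebuild a target_bits-wide integer tensor by appending
    (target_bits - base_bits) enhance layers to the basic layer."""
    w = basic.astype(np.int64)
    for plane in enhance[: target_bits - base_bits]:
        w = (w << 1) | plane                        # append one less significant bit
    return w

# Example: one 8-bit weight tensor served at 2-, 4-, and 8-bit precision.
rng = np.random.default_rng(0)
w8 = rng.integers(0, 256, size=(4, 4), dtype=np.int64)
basic, enh = decompose(w8)
w2 = reconstruct(basic, enh, target_bits=2)   # basic layer only
w4 = reconstruct(basic, enh, target_bits=4)   # basic + two enhance layers
assert np.array_equal(reconstruct(basic, enh, target_bits=8), w8)
```

Note that serving the basic layer alone amounts to naive bit truncation of an existing quantized model, which is exactly the setting the abstract reports as suffering severe degradation and which motivates the proposed once QAT training.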
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wu, Hai | - |
dc.contributor.author | He, Ruifei | - |
dc.contributor.author | Tan, Haoru | - |
dc.contributor.author | Qi, Xiaojuan | - |
dc.contributor.author | Huang, Kaibin | - |
dc.date.accessioned | 2024-03-11T10:26:44Z | - |
dc.date.available | 2024-03-11T10:26:44Z | - |
dc.date.issued | 2023-12-01 | - |
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, v. 45, n. 12, p. 15964-15978 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10722/338164 | - |
dc.description.abstract | Although considerable progress has been made in neural network quantization for efficient inference, existing methods are not scalable to heterogeneous devices, as one dedicated model needs to be trained, transmitted, and stored for each specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one. It represents weights as a group of bits (i.e., vertical layers) organized from the most significant bit (also called the basic layer) to less significant bits (i.e., enhance layers). Hence, a neural network with arbitrary quantization precision can be obtained by adding the corresponding enhance layers to the basic layer. However, we empirically find that models obtained with existing quantization methods suffer severe performance degradation if they are adapted to the vertical-layered weight representation. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design incorporates a cascade downsampling mechanism with multi-objective optimization to train the shared source model weights so that they are updated simultaneously with the performance of all networks taken into account. After the model is trained, to construct a vertical-layered network, the lowest bit-width quantized weights become the basic layer, and every bit dropped along the downsampling process acts as an enhance layer. Our design is extensively evaluated on the CIFAR-100 and ImageNet datasets. Experiments show that the proposed vertical-layered representation and the developed once QAT scheme effectively embody multiple quantized networks in a single model, allow one-time training, and deliver performance comparable to that of quantized models tailored to any specific bit-width. | -
dc.language | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.subject | bit-width scalable network | - |
dc.subject | Computational modeling | - |
dc.subject | Degradation | - |
dc.subject | Hardware | - |
dc.subject | layered coding | - |
dc.subject | multi-objective optimization | - |
dc.subject | Neural networks | - |
dc.subject | Optimization | - |
dc.subject | Quantization (signal) | - |
dc.subject | quantization-aware training | - |
dc.subject | Training | - |
dc.title | Vertical Layering of Quantized Neural Networks for Heterogeneous Inference | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/TPAMI.2023.3319045 | - |
dc.identifier.scopus | eid_2-s2.0-85173008691 | - |
dc.identifier.volume | 45 | - |
dc.identifier.issue | 12 | - |
dc.identifier.spage | 15964 | - |
dc.identifier.epage | 15978 | - |
dc.identifier.eissn | 1939-3539 | - |
dc.identifier.isi | WOS:001104973300118 | - |
dc.identifier.issnl | 0162-8828 | - |