File Download: There are no files associated with this item.
Links for fulltext (May Require Subscription):
- Publisher Website: 10.1109/ASICON47005.2019.8983497
- Scopus: eid_2-s2.0-85082595551
- WOS: WOS:000541465700071
- Appears in Collections:
Conference Paper: A Low-Power High-Throughput In-Memory CMOS-ReRAM Accelerator for Large-Scale Deep Residual Neural Networks
Title | A Low-Power High-Throughput In-Memory CMOS-ReRAM Accelerator for Large-Scale Deep Residual Neural Networks |
---|---|
Authors | Cheng, Y; Wong, N; Liu, X; Ni, L; Chen, HB; Yu, H |
Keywords | Deep residual network; in-memory computing; CMOS-ReRAM accelerator; trained quantization |
Issue Date | 2019 |
Publisher | IEEE. The conference proceedings are available at https://ieeexplore.ieee.org/xpl/conhome/1000054/all-proceedings |
Citation | The 13th International Conference on ASIC (ASICON 2019), Chongqing, China, 29 October-1 November 2019 |
Abstract | We present an in-memory accelerator circuit design for ResNet-50, a large-scale residual neural network with 49 convolutional layers, 2.6 × 10⁷ parameters and 4.1 × 10⁹ floating-point operations (FLOPs). A 4-bit quantized ResNet-50 is first chosen among various bitwidths for the best trade-off. It is then trained and fully mapped onto a 4608 × 512 ReRAM crossbar, yielding a storage reduction from 195.2 Mb to 24.3 Mb and an 88.1% top-5 accuracy on ImageNet, only 2.5% lower than the full-precision original. Versatile 4-bit CMOS DACs and ADCs are designed for input and readout, allowing the proposed CMOS-ReRAM accelerator to achieve up to 15.2× runtime speedup and 498× higher energy efficiency versus the state-of-the-art CMOS-ASIC implementation. |
Persistent Identifier | http://hdl.handle.net/10722/289866 |
ISSN | 2162-7541 (2020 SCImago Journal Rankings: 0.125) |
ISI Accession Number ID | WOS:000541465700071 |
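The trained 4-bit quantization described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: `quantize_uniform` is a hypothetical per-tensor uniform quantizer, shown only to make the storage arithmetic concrete (moving from 32-bit floats to 4-bit codes is an 8× reduction, consistent with the reported 195.2 Mb → 24.3 Mb after the paper's exact crossbar mapping).

```python
import numpy as np

def quantize_uniform(weights, bits=4):
    """Hypothetical per-tensor uniform quantizer (not the paper's method).

    Scales float weights by one shared factor, rounds to the nearest of
    2**bits signed integer levels, and returns the codes plus the scale
    needed to dequantize (w ≈ q * scale).
    """
    levels = 2 ** (bits - 1) - 1                  # 7 for 4-bit signed codes
    scale = np.abs(weights).max() / levels        # one float scale per tensor
    q = np.clip(np.round(weights / scale), -levels - 1, levels).astype(np.int8)
    return q, scale

# Storage arithmetic consistent with the abstract: 32-bit floats -> 4-bit
# codes is a 32/4 = 8x reduction, i.e. 195.2 Mb / 8 = 24.4 Mb, matching the
# reported 24.3 Mb after the paper's exact mapping.
print(195.2 / (32 // 4))  # -> 24.4
```

The 8× factor ignores the small per-tensor scale overhead, which is negligible next to the weight storage itself.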
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cheng, Y | - |
dc.contributor.author | Wong, N | - |
dc.contributor.author | Liu, X | - |
dc.contributor.author | Ni, L | - |
dc.contributor.author | Chen, HB | - |
dc.contributor.author | Yu, H | - |
dc.date.accessioned | 2020-10-22T08:18:36Z | - |
dc.date.available | 2020-10-22T08:18:36Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | The 13th International Conference on ASIC (ASICON 2019), Chongqing, China, 29 October-1 November 2019 | - |
dc.identifier.issn | 2162-7541 | - |
dc.identifier.uri | http://hdl.handle.net/10722/289866 | - |
dc.description.abstract | We present an in-memory accelerator circuit design for ResNet-50, a large-scale residual neural network with 49 convolutional layers, 2.6 × 10⁷ parameters and 4.1 × 10⁹ floating-point operations (FLOPs). A 4-bit quantized ResNet-50 is first chosen among various bitwidths for the best trade-off. It is then trained and fully mapped onto a 4608 × 512 ReRAM crossbar, yielding a storage reduction from 195.2 Mb to 24.3 Mb and an 88.1% top-5 accuracy on ImageNet, only 2.5% lower than the full-precision original. Versatile 4-bit CMOS DACs and ADCs are designed for input and readout, allowing the proposed CMOS-ReRAM accelerator to achieve up to 15.2× runtime speedup and 498× higher energy efficiency versus the state-of-the-art CMOS-ASIC implementation. | -
dc.language | eng | - |
dc.publisher | IEEE. The conference proceedings are available at https://ieeexplore.ieee.org/xpl/conhome/1000054/all-proceedings | -
dc.relation.ispartof | International Conference on ASIC (ASICON) | - |
dc.rights | International Conference on ASIC (ASICON). Copyright © IEEE. | - |
dc.rights | ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | - |
dc.subject | Deep residual network | - |
dc.subject | in-memory computing | - |
dc.subject | CMOS-ReRAM accelerator | - |
dc.subject | trained quantization | - |
dc.title | A Low-Power High-Throughput In-Memory CMOS-ReRAM Accelerator for Large-Scale Deep Residual Neural Networks | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Cheng, Y: cyuan328@hku.hk | - |
dc.identifier.email | Wong, N: nwong@eee.hku.hk | - |
dc.identifier.authority | Wong, N=rp00190 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ASICON47005.2019.8983497 | - |
dc.identifier.scopus | eid_2-s2.0-85082595551 | - |
dc.identifier.hkuros | 315884 | - |
dc.identifier.isi | WOS:000541465700071 | - |
dc.publisher.place | United States | - |
dc.identifier.issnl | 2162-7541 | - |