Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/ICSICT55466.2022.9963263
- Scopus: eid_2-s2.0-85143977797
Citations:
- Scopus: 0
Appears in Collections:
- Conference Paper: A Hardware-Aware Neural Architecture Search Pareto Front Exploration for In-Memory Computing
| Title | A Hardware-Aware Neural Architecture Search Pareto Front Exploration for In-Memory Computing |
|---|---|
| Authors | Guan, Ziyi; Zhou, Wenyong; Ren, Yuan; Xie, Rui; Yu, Hao; Wong, Ngai |
| Issue Date | 25-Oct-2022 |
| Publisher | IEEE |
| Abstract | Traditional neural networks deployed on CPU/GPU architectures have achieved impressive results on various AI tasks. However, growing model sizes and intensive computation present stringent challenges for deployment on edge devices with restrictive compute and storage resources. This paper proposes a one-shot training-evaluation framework to solve the neural architecture search (NAS) problem for in-memory computing, targeting the emerging resistive random-access memory (RRAM) analog AI platform. We test the inference accuracy and hardware performance of subnets sampled along different dimensions of a pretrained supernet. Experiments show that the proposed one-shot hardware-aware NAS (HW-NAS) framework can effectively explore the Pareto front considering both accuracy and hardware performance, and generate more optimal models by morphing a standard backbone model. |
| Persistent Identifier | http://hdl.handle.net/10722/339479 |
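The abstract describes selecting subnets on the Pareto front of accuracy versus hardware cost. As a minimal illustrative sketch (not the authors' code; the point format and toy values are assumptions), the selection step could look like this: each sampled subnet yields an (accuracy, hardware cost) pair, and a subnet is kept only if no other subnet is at least as accurate and at least as cheap, with one of the two strictly better.

```python
def pareto_front(points):
    """Return the Pareto-optimal subset of (accuracy, cost) pairs,
    maximizing accuracy while minimizing hardware cost.

    A point is dominated if some other point is no worse in both
    objectives and strictly better in at least one.
    """
    front = []
    for acc, cost in points:
        dominated = any(
            (a >= acc and c <= cost) and (a > acc or c < cost)
            for a, c in points
        )
        if not dominated:
            front.append((acc, cost))
    return sorted(front)


if __name__ == "__main__":
    # Toy results for four hypothetical sampled subnets:
    # (accuracy %, hardware cost). (90.0, 9.0) is dominated by
    # (92.5, 8.0), which is both more accurate and cheaper.
    subnets = [(91.0, 5.0), (92.5, 8.0), (90.0, 9.0), (93.0, 12.0)]
    print(pareto_front(subnets))
```

A real HW-NAS loop would obtain the accuracy by evaluating each subnet inherited from the pretrained supernet and the cost from an RRAM hardware performance model, then apply a filter like the above to the collected results.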
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Guan, Ziyi | - |
| dc.contributor.author | Zhou, Wenyong | - |
| dc.contributor.author | Ren, Yuan | - |
| dc.contributor.author | Xie, Rui | - |
| dc.contributor.author | Yu, Hao | - |
| dc.contributor.author | Wong, Ngai | - |
| dc.date.accessioned | 2024-03-11T10:36:58Z | - |
| dc.date.available | 2024-03-11T10:36:58Z | - |
| dc.date.issued | 2022-10-25 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/339479 | - |
| dc.description.abstract | Traditional neural networks deployed on CPU/GPU architectures have achieved impressive results on various AI tasks. However, growing model sizes and intensive computation present stringent challenges for deployment on edge devices with restrictive compute and storage resources. This paper proposes a one-shot training-evaluation framework to solve the neural architecture search (NAS) problem for in-memory computing, targeting the emerging resistive random-access memory (RRAM) analog AI platform. We test the inference accuracy and hardware performance of subnets sampled along different dimensions of a pretrained supernet. Experiments show that the proposed one-shot hardware-aware NAS (HW-NAS) framework can effectively explore the Pareto front considering both accuracy and hardware performance, and generate more optimal models by morphing a standard backbone model. | - |
| dc.language | eng | - |
| dc.publisher | IEEE | - |
| dc.relation.ispartof | IEEE 16th International Conference on Solid-State & Integrated Circuit Technology (ICSICT), 25/10/2022-28/10/2022, Nanjing | - |
| dc.title | A Hardware-Aware Neural Architecture Search Pareto Front Exploration for In-Memory Computing | - |
| dc.type | Conference_Paper | - |
| dc.identifier.doi | 10.1109/ICSICT55466.2022.9963263 | - |
| dc.identifier.scopus | eid_2-s2.0-85143977797 | - |