Appears in Collections: postgraduate thesis: Neural computing in random resistive memory
Field | Value |
---|---|
Title | Neural computing in random resistive memory |
Authors | Wang, Shaocong (王少聪) |
Advisors | Zhang, S; Wong, N |
Issue Date | 2024 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Wang, S. [王少聪]. (2024). Neural computing in random resistive memory. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
Abstract | Recent years have witnessed growing interest in artificial intelligence and the Internet of Things, both of which demand ever-increasing efficiency and computational power due to the rapid expansion of data volume and model size. However, artificial intelligence systems based on conventional digital hardware face significant challenges, including the von Neumann bottleneck resulting from the separation of processing units and memory, the deceleration of Moore’s law due to the physical scaling limits of transistors, and high training costs.
Resistive random access memory (RRAM)-based computing-in-memory (CIM) hardware has been developed to address these issues. This hardware exhibits outstanding scalability due to its straightforward structure and three-dimensional stackability, and can efficiently perform vector-matrix operations using Ohm's law and Kirchhoff's current law. Despite its markedly improved scalability and efficiency, resistive memory is marred by programming non-idealities, such as high programming energy, long programming latency, and non-negligible analogue programming imprecision.
To tackle these challenges, this dissertation proposes a software-hardware co-design methodology that utilizes random neural networks based on random resistive memory arrays, which are constructed using low-cost, nanoscale, and stackable resistors for efficient in-memory computing. This approach leverages the intrinsic stochasticity of dielectric breakdown in resistive switching to implement randomly weighted connections in hardware for random networks, which effectively minimize training complexity thanks to their fixed, random weights. The co-design concept is systematically evaluated on tasks involving three fundamental data structures that are essential and common across scientific and engineering disciplines, with hardware ranging from a discrete optoelectronic short-term memory device to nonvolatile long-term random resistive memory crossbars containing approximately a hundred thousand weights.
For array-structured data, two systems are constructed, using a discrete optoelectronic short-term memory device and long-term random resistive memory crossbars, respectively. The first system employs the nonlinear dynamics of the optoelectronic device's optical current response for multitask optical image processing, while the second builds a random convolution-echo state network on random resistive memory crossbars for efficient time-series signal processing. For graph-structured data, an echo state graph neural network is implemented with a random resistive memory array, achieving state-of-the-art accuracy with significantly improved energy efficiency. For set-structured data, random resistive memory-based hardware is multiplexed across three data modalities, with networks as deep as six layers, for unified visual learning. To mitigate accumulated noise, hardware read noise is investigated and random sparse weights are proposed to enhance read-noise robustness.
This hardware (random resistive memory) / software (random network) co-design methodology may pave the way for the next generation of efficient artificial intelligence edge devices.
|
Degree | Doctor of Philosophy |
Subject | Resistive switching memory; Neural networks (Computer science); Artificial intelligence |
Dept/Program | Electrical and Electronic Engineering |
Persistent Identifier | http://hdl.handle.net/10722/344404 |
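As a brief illustration of the in-memory vector-matrix operation described in the abstract (a generic sketch; the symbols below are not taken from the thesis): applying a voltage vector to the rows of a resistive crossbar and reading the summed column currents computes a matrix-vector product in a single step,

$$
I_j = \sum_{i} G_{ij} V_i \quad\Longleftrightarrow\quad \mathbf{I} = \mathbf{G}^{\top} \mathbf{V},
$$

where Ohm's law gives the per-cell current $G_{ij} V_i$ for row voltage $V_i$ and cell conductance $G_{ij}$, and Kirchhoff's current law sums these contributions along column $j$; the conductance matrix $\mathbf{G}$ stores the (here fixed and random) weights.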
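The abstract's core software idea, random networks whose hidden weights are fixed and only whose readout is trained, can be sketched in a few lines of NumPy. This is a minimal illustrative model, not the thesis implementation: the sparse random matrix stands in for stochastically formed resistive cells, the multiplicative noise term stands in for analogue read noise, and all names, the sparsity level, and the noise magnitude are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sparse_weights(n_in, n_hidden, density=0.1):
    """Fixed random projection: a sparse random 'conductance' matrix that is never trained."""
    mask = rng.random((n_in, n_hidden)) < density
    return rng.standard_normal((n_in, n_hidden)) * mask

def analog_vmm(x, G, read_noise=0.05):
    """Vector-matrix product with multiplicative read noise, mimicking an analogue crossbar read."""
    noisy_G = G * (1.0 + read_noise * rng.standard_normal(G.shape))
    return x @ noisy_G

# Toy data: 1000 samples, 64 features, 10 classes (one-hot targets).
X = rng.standard_normal((1000, 64))
Y = np.eye(10)[rng.integers(0, 10, size=1000)]

G = random_sparse_weights(64, 512)    # fixed, random, sparse hidden weights
H = np.tanh(analog_vmm(X, G))         # nonlinear random features from the noisy "crossbar"

# Train only the linear readout (ridge regression); the random weights stay untouched.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)

pred = (np.tanh(analog_vmm(X, G)) @ W_out).argmax(axis=1)
print("training accuracy:", (pred == Y.argmax(axis=1)).mean())
```

Because only `W_out` is learned, training reduces to a single linear solve, which is the sense in which fixed random weights minimize training complexity; keeping `G` sparse also limits how much read noise each output accumulates, which is the intuition behind the sparse-weight scheme mentioned in the abstract.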
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Zhang, S | - |
dc.contributor.advisor | Wong, N | - |
dc.contributor.author | Wang, Shaocong | - |
dc.contributor.author | 王少聪 | - |
dc.date.accessioned | 2024-07-30T05:00:39Z | - |
dc.date.available | 2024-07-30T05:00:39Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Wang, S. [王少聪]. (2024). Neural computing in random resistive memory. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/344404 | - |
dc.description.abstract | Recent years have witnessed growing interest in artificial intelligence and the Internet of Things, both of which demand ever-increasing efficiency and computational power due to the rapid expansion of data volume and model size. However, artificial intelligence systems based on conventional digital hardware face significant challenges, including the von Neumann bottleneck resulting from the separation of processing units and memory, the deceleration of Moore’s law due to the physical scaling limits of transistors, and high training costs. Resistive random access memory (RRAM)-based computing-in-memory (CIM) hardware has been developed to address these issues. This hardware exhibits outstanding scalability due to its straightforward structure and three-dimensional stackability, and can efficiently perform vector-matrix operations using Ohm's law and Kirchhoff's current law. Despite its markedly improved scalability and efficiency, resistive memory is marred by programming non-idealities, such as high programming energy, long programming latency, and non-negligible analogue programming imprecision. To tackle these challenges, this dissertation proposes a software-hardware co-design methodology that utilizes random neural networks based on random resistive memory arrays, which are constructed using low-cost, nanoscale, and stackable resistors for efficient in-memory computing. This approach leverages the intrinsic stochasticity of dielectric breakdown in resistive switching to implement randomly weighted connections in hardware for random networks, which effectively minimize training complexity thanks to their fixed, random weights. The co-design concept is systematically evaluated on tasks involving three fundamental data structures that are essential and common across scientific and engineering disciplines, with hardware ranging from a discrete optoelectronic short-term memory device to nonvolatile long-term random resistive memory crossbars containing approximately a hundred thousand weights. For array-structured data, two systems are constructed, using a discrete optoelectronic short-term memory device and long-term random resistive memory crossbars, respectively. The first system employs the nonlinear dynamics of the optoelectronic device's optical current response for multitask optical image processing, while the second builds a random convolution-echo state network on random resistive memory crossbars for efficient time-series signal processing. For graph-structured data, an echo state graph neural network is implemented with a random resistive memory array, achieving state-of-the-art accuracy with significantly improved energy efficiency. For set-structured data, random resistive memory-based hardware is multiplexed across three data modalities, with networks as deep as six layers, for unified visual learning. To mitigate accumulated noise, hardware read noise is investigated and random sparse weights are proposed to enhance read-noise robustness. This hardware (random resistive memory) / software (random network) co-design methodology may pave the way for the next generation of efficient artificial intelligence edge devices. | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Resistive switching memory | - |
dc.subject.lcsh | Neural networks (Computer science) | - |
dc.subject.lcsh | Artificial intelligence | - |
dc.title | Neural computing in random resistive memory | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Doctor of Philosophy | - |
dc.description.thesislevel | Doctoral | - |
dc.description.thesisdiscipline | Electrical and Electronic Engineering | - |
dc.description.nature | published_or_final_version | - |
dc.date.hkucongregation | 2024 | - |
dc.identifier.mmsid | 991044836038703414 | - |