Links for fulltext (may require subscription):
- Publisher Website (DOI): https://doi.org/10.1038/s41565-023-01343-0
- Scopus: eid_2-s2.0-85150467946
- PMID: 36941361
- Web of Science: WOS:000953781100004
Article: An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning
Title | An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning |
---|---|
Authors | Ning, Hongkai; Yu, Zhihao; Zhang, Qingtian; Wen, Hengdi; Gao, Bin; Mao, Yun; Li, Yuankun; Zhou, Ying; Zhou, Yue; Chen, Jiewei; Liu, Lei; Wang, Wenfeng; Li, Taotao; Li, Yating; Meng, Wanqing; Li, Weisheng; Li, Yun; Qiu, Hao; Shi, Yi; Chai, Yang; Wu, Huaqiang; Wang, Xinran |
Issue Date | 2023 |
Citation | Nature Nanotechnology, 2023, v. 18, n. 5, p. 493-500 |
Abstract | The growing computational demand in artificial intelligence calls for hardware solutions that are capable of in situ machine learning, where both training and inference are performed by edge computation. This not only requires extremely energy-efficient architecture (such as in-memory computing) but also memory hardware with tunable properties to simultaneously meet the demand for training and inference. Here we report a duplex device structure based on a ferroelectric field-effect transistor and an atomically thin MoS₂ channel, and realize a universal in-memory computing architecture for in situ learning. By exploiting the tunability of the ferroelectric energy landscape, the duplex building block demonstrates an overall excellent performance in endurance (>10¹³), retention (>10 years), speed (4.8 ns) and energy consumption (22.7 fJ bit⁻¹ μm⁻²). We implemented a hardware neural network using arrays of two-transistors-one-duplex ferroelectric field-effect transistor cells and achieved 99.86% accuracy in a nonlinear localization task with in situ trained weights. Simulations show that the proposed device architecture could achieve the same level of performance as a graphics processing unit under notably improved energy efficiency. Our device core can be combined with silicon circuitry through three-dimensional heterogeneous integration to give a hardware solution towards general edge intelligence. |
Persistent Identifier | http://hdl.handle.net/10722/336371 |
ISSN | 1748-3387 (print); 1748-3395 (online). 2023 Impact Factor: 38.1; 2023 SCImago Journal Rank: 14.577 |
ISI Accession Number ID | WOS:000953781100004 |
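
As a rough illustration of the in-memory computing concept summarised in the abstract above, the sketch below (Python/NumPy) encodes weights as quantised analogue conductances in a crossbar array, performs matrix-vector multiplication in place, and writes training updates back to the same devices ("in situ" learning). Every device parameter here (number of conductance levels, conductance range, learning rate) and the toy circle-classification task are assumptions made for illustration only; this is not the authors' implementation, task data or device model.

```python
# Hedged sketch of crossbar-style in-memory computing with in situ learning.
# All device parameters below are made-up placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_LEVELS = 32          # hypothetical number of programmable conductance states
G_MAX = 1.0            # normalised maximum conductance

def quantize(w):
    """Snap ideal weight values onto the array's discrete conductance levels."""
    step = 2 * G_MAX / (N_LEVELS - 1)
    return np.clip(np.round(w / step) * step, -G_MAX, G_MAX)

def analogue_mvm(G, v):
    """Read operation: column output currents for input voltage vector v
    (Ohm's law per device, Kirchhoff current summation per column)."""
    return G.T @ v

# Toy nonlinear task: points inside vs. outside a circle, mapped through a
# fixed random nonlinear feature layer so a single crossbar layer suffices.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)

P = rng.normal(size=(2, 64))
b = rng.normal(size=64)
phi = np.tanh(X @ P + b)
phi = np.hstack([phi, np.ones((X.shape[0], 1))])             # bias input line

G = quantize(rng.normal(scale=0.1, size=(phi.shape[1], 1)))  # crossbar weights
lr = 0.5

for epoch in range(20):
    for x_in, target in zip(phi, y):
        out = 1.0 / (1.0 + np.exp(-analogue_mvm(G, x_in)))   # read: analogue MVM
        err = target - out
        G = quantize(G + lr * np.outer(x_in, err))           # write: in situ update

pred = (1.0 / (1.0 + np.exp(-(phi @ G))) > 0.5).ravel()
print(f"training-set accuracy: {(pred == y).mean():.2%}")
```

The `quantize` step stands in for writing a continuous weight update into a device with a finite number of nonvolatile conductance states, the tunability that the abstract attributes to the ferroelectric energy landscape of the duplex cell.
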
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ning, Hongkai | - |
dc.contributor.author | Yu, Zhihao | - |
dc.contributor.author | Zhang, Qingtian | - |
dc.contributor.author | Wen, Hengdi | - |
dc.contributor.author | Gao, Bin | - |
dc.contributor.author | Mao, Yun | - |
dc.contributor.author | Li, Yuankun | - |
dc.contributor.author | Zhou, Ying | - |
dc.contributor.author | Zhou, Yue | - |
dc.contributor.author | Chen, Jiewei | - |
dc.contributor.author | Liu, Lei | - |
dc.contributor.author | Wang, Wenfeng | - |
dc.contributor.author | Li, Taotao | - |
dc.contributor.author | Li, Yating | - |
dc.contributor.author | Meng, Wanqing | - |
dc.contributor.author | Li, Weisheng | - |
dc.contributor.author | Li, Yun | - |
dc.contributor.author | Qiu, Hao | - |
dc.contributor.author | Shi, Yi | - |
dc.contributor.author | Chai, Yang | - |
dc.contributor.author | Wu, Huaqiang | - |
dc.contributor.author | Wang, Xinran | - |
dc.date.accessioned | 2024-01-15T08:26:15Z | - |
dc.date.available | 2024-01-15T08:26:15Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Nature Nanotechnology, 2023, v. 18, n. 5, p. 493-500 | - |
dc.identifier.issn | 1748-3387 | - |
dc.identifier.uri | http://hdl.handle.net/10722/336371 | - |
dc.description.abstract | The growing computational demand in artificial intelligence calls for hardware solutions that are capable of in situ machine learning, where both training and inference are performed by edge computation. This not only requires extremely energy-efficient architecture (such as in-memory computing) but also memory hardware with tunable properties to simultaneously meet the demand for training and inference. Here we report a duplex device structure based on a ferroelectric field-effect transistor and an atomically thin MoS₂ channel, and realize a universal in-memory computing architecture for in situ learning. By exploiting the tunability of the ferroelectric energy landscape, the duplex building block demonstrates an overall excellent performance in endurance (>10¹³), retention (>10 years), speed (4.8 ns) and energy consumption (22.7 fJ bit⁻¹ μm⁻²). We implemented a hardware neural network using arrays of two-transistors-one-duplex ferroelectric field-effect transistor cells and achieved 99.86% accuracy in a nonlinear localization task with in situ trained weights. Simulations show that the proposed device architecture could achieve the same level of performance as a graphics processing unit under notably improved energy efficiency. Our device core can be combined with silicon circuitry through three-dimensional heterogeneous integration to give a hardware solution towards general edge intelligence. | -
dc.language | eng | - |
dc.relation.ispartof | Nature Nanotechnology | - |
dc.title | An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1038/s41565-023-01343-0 | - |
dc.identifier.pmid | 36941361 | - |
dc.identifier.scopus | eid_2-s2.0-85150467946 | - |
dc.identifier.volume | 18 | - |
dc.identifier.issue | 5 | - |
dc.identifier.spage | 493 | - |
dc.identifier.epage | 500 | - |
dc.identifier.eissn | 1748-3395 | - |
dc.identifier.isi | WOS:000953781100004 | - |