Article: An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning

Title: An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning
Authors: Ning, Hongkai; Yu, Zhihao; Zhang, Qingtian; Wen, Hengdi; Gao, Bin; Mao, Yun; Li, Yuankun; Zhou, Ying; Zhou, Yue; Chen, Jiewei; Liu, Lei; Wang, Wenfeng; Li, Taotao; Li, Yating; Meng, Wanqing; Li, Weisheng; Li, Yun; Qiu, Hao; Shi, Yi; Chai, Yang; Wu, Huaqiang; Wang, Xinran
Issue Date: 2023
Citation: Nature Nanotechnology, 2023, v. 18, n. 5, p. 493-500
Abstract: The growing computational demand in artificial intelligence calls for hardware solutions that are capable of in situ machine learning, where both training and inference are performed by edge computation. This not only requires extremely energy-efficient architecture (such as in-memory computing) but also memory hardware with tunable properties to simultaneously meet the demand for training and inference. Here we report a duplex device structure based on a ferroelectric field-effect transistor and an atomically thin MoS₂ channel, and realize a universal in-memory computing architecture for in situ learning. By exploiting the tunability of the ferroelectric energy landscape, the duplex building block demonstrates an overall excellent performance in endurance (>10¹³), retention (>10 years), speed (4.8 ns) and energy consumption (22.7 fJ bit⁻¹ μm⁻²). We implemented a hardware neural network using arrays of two-transistors-one-duplex ferroelectric field-effect transistor cells and achieved 99.86% accuracy in a nonlinear localization task with in situ trained weights. Simulations show that the proposed device architecture could achieve the same level of performance as a graphics processing unit under notably improved energy efficiency. Our device core can be combined with silicon circuitry through three-dimensional heterogeneous integration to give a hardware solution towards general edge intelligence.
Persistent Identifier: http://hdl.handle.net/10722/336371
ISSN: 1748-3387
2023 Impact Factor: 38.1
2023 SCImago Journal Rankings: 14.577
ISI Accession Number ID: WOS:000953781100004
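As a conceptual aid to the abstract, the sketch below illustrates only the generic in-memory computing principle the work builds on: weights are stored as programmable conductances in a crossbar, a vector-matrix product is obtained in a single analog read step, and an in situ update rule adjusts the stored conductances in place so that training and inference happen where the memory sits. This is a minimal NumPy illustration under assumed array sizes, learning rate and a toy task; it is not the authors' device model, array architecture or training scheme.

# Conceptual sketch of in-memory computing (illustrative only, not the paper's implementation).
# A crossbar of programmable conductances G computes the column currents I = V · G in one step;
# an in situ delta-rule-style update then adjusts the stored conductances in place.
# All sizes, the learning rate and the toy task below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 2
# Conductance matrix: this array plays the role of the nonvolatile memory holding the weights.
G = rng.uniform(0.1, 1.0, size=(n_inputs, n_outputs))

def crossbar_mac(v_in, G):
    """Analog multiply-accumulate: each column current is the weighted sum of the row voltages."""
    return v_in @ G  # I_j = sum_i V_i * G_ij

def in_situ_update(G, v_in, error, lr=0.05):
    """Delta-rule-style update applied directly to the stored conductances."""
    G += lr * np.outer(v_in, error)
    np.clip(G, 0.0, 1.5, out=G)  # real devices have a finite conductance window
    return G

# Toy in situ training loop on random input/target pairs (illustrative only).
inputs = rng.uniform(0.0, 1.0, size=(8, n_inputs))
targets = rng.uniform(0.0, 1.0, size=(8, n_outputs))
for epoch in range(200):
    for v, t in zip(inputs, targets):
        y = crossbar_mac(v, G)           # inference happens where the weights live
        G = in_situ_update(G, v, t - y)  # and so does training

print("final mean squared error:", np.mean((inputs @ G - targets) ** 2))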

 

DC Field: Value
dc.contributor.author: Ning, Hongkai
dc.contributor.author: Yu, Zhihao
dc.contributor.author: Zhang, Qingtian
dc.contributor.author: Wen, Hengdi
dc.contributor.author: Gao, Bin
dc.contributor.author: Mao, Yun
dc.contributor.author: Li, Yuankun
dc.contributor.author: Zhou, Ying
dc.contributor.author: Zhou, Yue
dc.contributor.author: Chen, Jiewei
dc.contributor.author: Liu, Lei
dc.contributor.author: Wang, Wenfeng
dc.contributor.author: Li, Taotao
dc.contributor.author: Li, Yating
dc.contributor.author: Meng, Wanqing
dc.contributor.author: Li, Weisheng
dc.contributor.author: Li, Yun
dc.contributor.author: Qiu, Hao
dc.contributor.author: Shi, Yi
dc.contributor.author: Chai, Yang
dc.contributor.author: Wu, Huaqiang
dc.contributor.author: Wang, Xinran
dc.date.accessioned: 2024-01-15T08:26:15Z
dc.date.available: 2024-01-15T08:26:15Z
dc.date.issued: 2023
dc.identifier.citation: Nature Nanotechnology, 2023, v. 18, n. 5, p. 493-500
dc.identifier.issn: 1748-3387
dc.identifier.uri: http://hdl.handle.net/10722/336371
dc.description.abstract: The growing computational demand in artificial intelligence calls for hardware solutions that are capable of in situ machine learning, where both training and inference are performed by edge computation. This not only requires extremely energy-efficient architecture (such as in-memory computing) but also memory hardware with tunable properties to simultaneously meet the demand for training and inference. Here we report a duplex device structure based on a ferroelectric field-effect transistor and an atomically thin MoS₂ channel, and realize a universal in-memory computing architecture for in situ learning. By exploiting the tunability of the ferroelectric energy landscape, the duplex building block demonstrates an overall excellent performance in endurance (>10¹³), retention (>10 years), speed (4.8 ns) and energy consumption (22.7 fJ bit⁻¹ μm⁻²). We implemented a hardware neural network using arrays of two-transistors-one-duplex ferroelectric field-effect transistor cells and achieved 99.86% accuracy in a nonlinear localization task with in situ trained weights. Simulations show that the proposed device architecture could achieve the same level of performance as a graphics processing unit under notably improved energy efficiency. Our device core can be combined with silicon circuitry through three-dimensional heterogeneous integration to give a hardware solution towards general edge intelligence.
dc.language: eng
dc.relation.ispartof: Nature Nanotechnology
dc.title: An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1038/s41565-023-01343-0
dc.identifier.pmid: 36941361
dc.identifier.scopus: eid_2-s2.0-85150467946
dc.identifier.volume: 18
dc.identifier.issue: 5
dc.identifier.spage: 493
dc.identifier.epage: 500
dc.identifier.eissn: 1748-3395
dc.identifier.isi: WOS:000953781100004
