Article: A prompt-engineered large language model, deep learning workflow for materials classification

Title: A prompt-engineered large language model, deep learning workflow for materials classification
Authors: Liu, Siyu; Wen, Tongqi; Pattamatta, Aditya Srinivasa; Srolovitz, David J.
Keywords: Deep learning; Large language model; Materials classification; Prompt engineering
Issue Date: 19-Sep-2024
Publisher: Elsevier
Citation: Materials Today, 2024
Abstract

Large language models (LLMs) have demonstrated rapid progress across a wide array of domains. Owing to their very large number of parameters and the breadth of their training data, LLMs inherently encompass an expansive and comprehensive materials knowledge base, far exceeding the capabilities of individual researchers. Nonetheless, devising methods to harness the knowledge embedded within LLMs for the design and discovery of novel materials remains a formidable challenge. We introduce a general approach to materials classification problems that incorporates LLMs, prompt engineering, and deep learning. Using a dataset of metallic glasses as a case study, our methodology achieved an improvement of up to 463% in prediction accuracy compared with conventional classification models. These findings underscore the potential of leveraging textual knowledge generated by LLMs for materials, especially in the common situation where datasets are sparse, thereby promoting innovation in materials discovery and design.
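
The abstract outlines a workflow that couples prompt-engineered LLM queries with a deep-learning classifier. The sketch below illustrates how such a pipeline could be assembled; the prompt wording, the query_llm() helper, the sentence-transformers embedding model, and the MLPClassifier are assumptions introduced for illustration only, not the implementation reported in the article.

```python
# Illustrative sketch of an LLM + prompt-engineering + deep-learning
# classification workflow of the kind the abstract describes.
# All names below (query_llm, describe_alloy, model choices) are hypothetical.

from sentence_transformers import SentenceTransformer  # pretrained text encoder
from sklearn.neural_network import MLPClassifier


def query_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM API and return its reply.
    Replace the body with a call to whichever LLM service is available."""
    raise NotImplementedError("Connect this function to an LLM API.")


def describe_alloy(composition: str) -> str:
    """Prompt-engineered query asking for textual knowledge relevant to glass formation."""
    prompt = (
        f"In 3-4 sentences, describe the atomic size mismatch, mixing enthalpy, "
        f"and likely glass-forming ability of the alloy {composition}."
    )
    return query_llm(prompt)


def train_glass_classifier(compositions: list[str], labels: list[int]) -> MLPClassifier:
    """Embed LLM-generated descriptions and fit a small neural-network classifier.
    labels: 1 = glass former, 0 = non-glass former (example encoding)."""
    texts = [describe_alloy(c) for c in compositions]
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    features = encoder.encode(texts)  # dense text embeddings
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(features, labels)
    return clf
```

The key design idea, as the abstract presents it, is that the classifier learns from LLM-generated text rather than from raw compositional features alone, which can help when the labeled dataset is sparse.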


Persistent Identifier: http://hdl.handle.net/10722/350194
ISSN: 1369-7021
2023 Impact Factor: 21.1
2023 SCImago Journal Rankings: 5.949

 

DC Field: Value
dc.contributor.author: Liu, Siyu
dc.contributor.author: Wen, Tongqi
dc.contributor.author: Pattamatta, Aditya Srinivasa
dc.contributor.author: Srolovitz, David J.
dc.date.accessioned: 2024-10-21T03:56:46Z
dc.date.available: 2024-10-21T03:56:46Z
dc.date.issued: 2024-09-19
dc.identifier.citation: Materials Today, 2024
dc.identifier.issn: 1369-7021
dc.identifier.uri: http://hdl.handle.net/10722/350194
dc.description.abstract: Large language models (LLMs) have demonstrated rapid progress across a wide array of domains. Owing to their very large number of parameters and the breadth of their training data, LLMs inherently encompass an expansive and comprehensive materials knowledge base, far exceeding the capabilities of individual researchers. Nonetheless, devising methods to harness the knowledge embedded within LLMs for the design and discovery of novel materials remains a formidable challenge. We introduce a general approach to materials classification problems that incorporates LLMs, prompt engineering, and deep learning. Using a dataset of metallic glasses as a case study, our methodology achieved an improvement of up to 463% in prediction accuracy compared with conventional classification models. These findings underscore the potential of leveraging textual knowledge generated by LLMs for materials, especially in the common situation where datasets are sparse, thereby promoting innovation in materials discovery and design.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Materials Today
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Deep learning
dc.subject: Large language model
dc.subject: Materials classification
dc.subject: Prompt engineering
dc.title: A prompt-engineered large language model, deep learning workflow for materials classification
dc.type: Article
dc.identifier.doi: 10.1016/j.mattod.2024.08.028
dc.identifier.scopus: eid_2-s2.0-85204408904
dc.identifier.eissn: 1873-4103
dc.identifier.issnl: 1369-7021
