
Conference Paper: GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning

Title: GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning
Authors: Zhao, Haiteng; Liu, Shengchao; Ma, Chang; Xu, Hannan; Fu, Jie; Deng, Zhi-Hong; Kong, Lingpeng; Liu, Qi
Issue Date: 30-Nov-2023
Abstract

Molecule property prediction has gained significant attention in recent years. The main bottleneck is label insufficiency caused by expensive lab experiments. To alleviate this issue and better leverage textual knowledge, this study investigates the feasibility of employing natural language instructions to accomplish molecule-related tasks in a zero-shot setting. We discover that existing molecule-text models perform poorly in this setting due to inadequate treatment of instructions and limited capacity for graphs. To overcome these issues, we propose GIMLET, which unifies language models for both graph and text data. By adopting generalized position embedding, our model is extended to encode both graph structures and instruction text without additional graph encoding modules. GIMLET also decouples the encoding of the graph from task instructions in the attention mechanism, enhancing the generalization of graph features across novel tasks. We construct a dataset of more than two thousand molecule tasks with corresponding instructions derived from task descriptions. We pretrain GIMLET on these molecule tasks along with their instructions, enabling the model to transfer effectively to a broad range of tasks. Experimental results demonstrate that GIMLET significantly outperforms molecule-text baselines in instruction-based zero-shot learning, even achieving results close to supervised GNN models on tasks such as ToxCast and MUV.
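The decoupled attention described in the abstract — graph encoding kept independent of the task instruction so graph features generalize across novel tasks — can be illustrated with a small mask-construction sketch. This is an assumption-laden illustration, not the authors' implementation: the function name, token layout (graph tokens first, then instruction tokens), and the boolean-mask convention are all invented here for clarity.

```python
import numpy as np

def gimlet_style_mask(n_graph: int, n_text: int) -> np.ndarray:
    """Illustrative decoupled attention mask (hypothetical layout).

    Graph tokens attend only to other graph tokens, so the graph
    representation does not depend on the instruction text; instruction
    tokens attend to both graph and text tokens, so the task-specific
    readout can condition on the molecule.

    Returns a boolean matrix M where M[i, j] = True means token i
    may attend to token j.
    """
    n = n_graph + n_text
    mask = np.zeros((n, n), dtype=bool)
    # Graph block: graph tokens see only the graph.
    mask[:n_graph, :n_graph] = True
    # Instruction tokens see everything (graph + instruction).
    mask[n_graph:, :] = True
    return mask
```

Under this layout, swapping in a new instruction changes only the rows for the text tokens; the graph-to-graph block of the mask (and hence the graph encoding) is unchanged, which is the generalization property the abstract attributes to the decoupling.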


Persistent Identifier: http://hdl.handle.net/10722/339086


DC Field | Value | Language
dc.contributor.author | Zhao, Haiteng | -
dc.contributor.author | Liu, Shengchao | -
dc.contributor.author | Ma, Chang | -
dc.contributor.author | Xu, Hannan | -
dc.contributor.author | Fu, Jie | -
dc.contributor.author | Deng, Zhi-Hong | -
dc.contributor.author | Kong, Lingpeng | -
dc.contributor.author | Liu, Qi | -
dc.date.accessioned | 2024-03-11T10:33:48Z | -
dc.date.available | 2024-03-11T10:33:48Z | -
dc.date.issued | 2023-11-30 | -
dc.identifier.uri | http://hdl.handle.net/10722/339086 | -
dc.description.abstract | <p>Molecule property prediction has gained significant attention in recent years. The main bottleneck is label insufficiency caused by expensive lab experiments. To alleviate this issue and better leverage textual knowledge, this study investigates the feasibility of employing natural language instructions to accomplish molecule-related tasks in a zero-shot setting. We discover that existing molecule-text models perform poorly in this setting due to inadequate treatment of instructions and limited capacity for graphs. To overcome these issues, we propose GIMLET, which unifies language models for both graph and text data. By adopting generalized position embedding, our model is extended to encode both graph structures and instruction text without additional graph encoding modules. GIMLET also decouples the encoding of the graph from task instructions in the attention mechanism, enhancing the generalization of graph features across novel tasks. We construct a dataset of more than two thousand molecule tasks with corresponding instructions derived from task descriptions. We pretrain GIMLET on these molecule tasks along with their instructions, enabling the model to transfer effectively to a broad range of tasks. Experimental results demonstrate that GIMLET significantly outperforms molecule-text baselines in instruction-based zero-shot learning, even achieving results close to supervised GNN models on tasks such as ToxCast and MUV.</p> | -
dc.language | eng | -
dc.relation.ispartof | NeurIPS (01/12/2023-07/12/2023) | -
dc.title | GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning | -
dc.type | Conference_Paper | -
