The official repository of "What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks". https://arxiv.org/abs/2305.18365
The following are the prompts used in our paper. Trying your own prompt design is easy: simply change the prompt in the Jupyter notebook of each task and rerun it to see the results and performance.
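As a hypothetical sketch of what "changing the prompt" means in practice (the template text and helper function below are illustrative, not taken from the repository's notebooks), swapping in a custom prompt typically amounts to editing one template string:

```python
# Hypothetical example: a customizable prompt template for a chemistry task.
# The template wording and build_prompt helper are ours, not from the repo.

PROMPT_TEMPLATE = (
    "You are an expert chemist. Given the reactants' SMILES, "
    "predict the product SMILES.\n"
    "Reactants: {smiles}\n"
    "Product:"
)

def build_prompt(smiles: str) -> str:
    """Fill the template with one reaction's reactant SMILES string."""
    return PROMPT_TEMPLATE.format(smiles=smiles)

# Example: a prompt for the esterification of ethanol and acetic acid.
print(build_prompt("CCO.CC(=O)O"))
```

To try a different design, you would only edit `PROMPT_TEMPLATE` in the task's notebook and rerun the evaluation cells.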
The datasets for some tasks are already included in this repository. Because of the size limit, the remaining datasets must be downloaded from the links below. After downloading, move each dataset into the corresponding folder; you can then run our Jupyter notebook for each task.

| Dataset | Link | Reference |
|---|---|---|
| USPTO_Mixed | download | https://github.com/MolecularAI/Chemformer |
| USPTO-50k | download | https://github.com/MolecularAI/Chemformer |
| ChEBI-20 | download | https://github.com/blender-nlp/MolT5 |
| Suzuki-Miyaura | download | https://github.com/seokhokang/reaction_yield_nn |
| Buchwald-Hartwig | download | https://github.com/seokhokang/reaction_yield_nn |
| BBBP, BACE, HIV, Tox21, ClinTox | download | https://github.com/hwwang55/MolR |
| PubChem | download | https://github.com/ChemFoundationModels/ChemLLMBench/blob/main/data/name_prediction/llm_test.csv |
If you find our work useful, please cite:

@misc{guo2023gpt,
title={What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks},
author={Taicheng Guo and Kehan Guo and Bozhao Nan and Zhenwen Liang and Zhichun Guo and Nitesh V. Chawla and Olaf Wiest and Xiangliang Zhang},
year={2023},
eprint={2305.18365},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Contact: Taicheng Guo (tguo2@nd.edu), Xiangliang Zhang (xzhang33@nd.edu)