lsj2408 / Transformer-M

[ICLR 2023] One Transformer Can Understand Both 2D & 3D Molecular Data (official implementation)
https://arxiv.org/abs/2210.01765
MIT License

Training on QM9 #8

Closed RobDHess closed 1 year ago

RobDHess commented 1 year ago

Hi,

Would it be possible to provide the commands for training a model on QM9 from scratch? This setup is mentioned in Appendix B.5, where the effectiveness of pre-training is investigated.

Kind regards,

Rob

lsj2408 commented 1 year ago

Hi, I have released the fine-tuning code for QM9. To train the Transformer-M model from scratch, simply add `export no_pretrain="true"` before running finetune_qm9.sh.
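
For reference, the full command sequence would then look like the sketch below (assuming finetune_qm9.sh is the QM9 fine-tuning script shipped with this repo, and that it reads the `no_pretrain` environment variable as described above):

```shell
# Train Transformer-M on QM9 from scratch, without loading pre-trained weights.
export no_pretrain="true"   # tells the script to skip the pre-trained checkpoint
bash finetune_qm9.sh        # run the repo's QM9 fine-tuning script
```

Leaving `no_pretrain` unset (or not `"true"`) would restore the default behavior of fine-tuning from the released pre-trained checkpoint.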