FabianFuchsML / se3-transformer-public

Code for the SE(3)-Transformers paper: https://arxiv.org/abs/2006.10503

Optimal parameter set for QM9 task? #26

Closed colormeblue1013 closed 2 years ago

colormeblue1013 commented 2 years ago

Hi Fabian,

Could you share the hyperparameter sets for the SE(3)-Transformer and TFN that produce the best results? I'm currently running the QM9 task, and it would be really helpful.

Thanks.

FabianFuchsML commented 2 years ago

Hi! Glad to hear that you are interested in our work. I recently started recommending this implementation of the SE(3)-Transformer instead:

https://developer.nvidia.com/blog/accelerating-se3-transformers-training-using-an-nvidia-open-source-model-implementation/

They managed to speed up training of the SE(3)-Transformer by up to 21(!) times and to reduce memory consumption by up to 43 times.

It includes a QM9 setup, and they provide a set of hyperparameters.
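As an aside, a convenient way to keep such a hyperparameter set together when running your own experiments is a single config object. The sketch below is purely illustrative: the field names and values are assumptions for demonstration, not the tuned defaults from the NVIDIA repository, so consult the linked code for the real ones.

```python
from dataclasses import dataclass

@dataclass
class QM9Config:
    """Illustrative hyperparameter container -- values are placeholders,
    not the NVIDIA-tuned defaults for the SE(3)-Transformer on QM9."""
    num_layers: int = 7        # number of equivariant attention layers
    num_heads: int = 8         # attention heads per layer
    channels: int = 32         # feature channels per degree
    max_degree: int = 3        # highest spherical-harmonic degree used
    batch_size: int = 240
    learning_rate: float = 2e-3
    epochs: int = 100

cfg = QM9Config()
print(cfg.num_layers, cfg.learning_rate)
```

Keeping every knob in one dataclass makes it easy to log the exact configuration alongside each QM9 run and to override individual fields from the command line.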

This is the code: https://github.com/NVIDIA/DeepLearningExamples/tree/master/DGLPyTorch/DrugDiscovery/SE3Transformer

Have fun!

colormeblue1013 commented 2 years ago

Thanks, Fabian! I'll give it a try.