Closed colormeblue1013 closed 2 years ago
Hi! Glad to hear that you are interested in our work. I have recently started recommending this implementation of the SE(3)-Transformer instead:
They managed to speed up training of the SE(3)-Transformer by up to 21(!) times and to reduce memory consumption by up to 43 times.
It is implemented for QM9, and they provide a set of hyperparameters.
This is the code: https://github.com/NVIDIA/DeepLearningExamples/tree/master/DGLPyTorch/DrugDiscovery/SE3Transformer
Have fun!
Thanks, Fabian! I'll give it a try.
Hi Fabian,
could you share the hyperparameter sets for the SE(3)-Transformer and TFN that produce the best results? I'm currently running the QM9 task, and it would be really helpful.
Thanks.