carlosbornes closed this issue 4 months ago
Dear Carlos,
we have not benchmarked the performance of SchNetPack running on several GPUs. Nevertheless, you can run SchNetPack on multiple GPUs in parallel simply by adapting some arguments of the PyTorch Lightning Trainer.
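For illustration, a minimal sketch of what that could look like (the `task` and `datamodule` placeholders stand in for your actual SchNetPack setup; `accelerator`, `devices`, and `strategy` are standard Lightning Trainer arguments):

```python
import pytorch_lightning as pl

# Placeholders: in SchNetPack these would typically be your AtomisticTask
# (a LightningModule) and your AtomsDataModule.
task = ...        # e.g. schnetpack.task.AtomisticTask(...)
datamodule = ...  # e.g. schnetpack.data.AtomsDataModule(...)

# Multi-GPU training via standard Lightning arguments:
# one DDP process per GPU, gradients synchronized after each backward pass.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=4,       # number of GPUs to use on this node
    strategy="ddp",  # DistributedDataParallel
    max_epochs=100,
)
trainer.fit(task, datamodule=datamodule)
```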
Best, Jonas
Thanks, I will try that @jnsLs
Multi-GPU inference with LAMMPS is not usable for me: when I want to run a large system, a single GPU goes out of memory (OOM).
Is there any method to overcome this?
That's right, at the current status our LAMMPS interface does not support the use of multiple GPUs in parallel. However, training does. If you want to run MD on multiple GPUs, you could use the SchNetPack MD package; slight adaptations in the code might be needed to make it run on multiple GPUs.
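For reference, a rough single-GPU skeleton of that route, assuming the class names from the SchNetPack 2.x MD tutorial (exact signatures, file paths, and units may differ in your version, and the multi-GPU adaptation itself is not shown):

```python
import torch
from ase.io import read
from schnetpack.md import System, Simulator
from schnetpack.md.integrators import VelocityVerlet
from schnetpack.md.calculators import SchNetPackCalculator
from schnetpack.md.neighborlist_md import NeighborListMD
from schnetpack.transform import ASENeighborList

# Wrap the starting structure in an MD system (single replica).
md_system = System()
md_system.load_molecules(read("structure.xyz"), n_replicas=1)

# Calculator driven by a trained SchNetPack model; cutoffs in Angstrom.
calculator = SchNetPackCalculator(
    "best_model",  # path to the trained model checkpoint (assumed)
    force_key="forces",
    energy_unit="kcal/mol",
    position_unit="Angstrom",
    neighbor_list=NeighborListMD(
        cutoff=5.0, cutoff_shell=2.0, base_nbl=ASENeighborList
    ),
)

# 0.5 fs velocity Verlet integration; this runs on a single GPU, which is
# the point where the multi-GPU adaptations would have to start.
simulator = Simulator(md_system, VelocityVerlet(0.5), calculator)
simulator = simulator.to(torch.device("cuda:0"))
simulator.simulate(1000)
```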
OK, thanks
Dear devs,
Have you benchmarked the behaviour of training a model or running MD in parallel on GPUs? I found some discussion here, but it seems quite mixed, or comes from old issues saying that SchNet should eventually run in parallel with PyTorch Lightning.
I'm currently writing an HPC application and would like to cite something on the ability (or inability) of SchNetPack to train and run in parallel on GPUs.
Best, Carlos