Closed: kei0822kei closed this 2 weeks ago
This isn't really a question about OpenMM-Torch. It just calculates whatever PyTorch model you give it, and the model takes however much memory it takes. In this case you created the model with TorchMD-Net, so that's where any changes would need to be made to reduce its memory use.
Thank you for your advice and sorry for asking in the wrong place.
Hi,
Thank you for maintaining this great package. I want to simulate a relatively large system (~10,000 atoms) using TensorNet.
After training a model with TensorNet-SPICE.yaml, I tried to use it for MD simulation of a larger system via openmm-torch. When I simulated a system of ~4,000 atoms, 80 GiB of GPU memory filled up. I found that the force calculation (the backpropagation phase) consumed most of the GPU memory and caused an out-of-memory error.
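For context, this is roughly how I attach the model to OpenMM (a minimal sketch; the file name is a placeholder for my TorchScript export, and `system` is an existing `openmm.System`):

```python
import torch
from openmmtorch import TorchForce

# Load the trained TensorNet model exported as TorchScript (placeholder path)
torch_force = TorchForce('tensornet_model.pt')
# openmm-torch wraps the module as an OpenMM Force; forces come from
# backpropagating the model's energy with respect to positions
system.addForce(torch_force)
```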
Is there a way to avoid this?
I expect the atomic-energy calculation in the 'representation_model' (TensorNet) could be split into batches so that peak GPU memory stays small. Is that possible?
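To illustrate the idea, here is a minimal sketch of batch-wise force accumulation. Note that `atomic_energy_fn` is a hypothetical callable (TorchMD-Net does not expose such an interface directly); it stands in for a modified forward pass that returns the summed atomic energies of a subset of atoms:

```python
import torch

def forces_in_chunks(atomic_energy_fn, positions, chunk_size=1000):
    """Accumulate forces chunk by chunk to bound peak memory.

    atomic_energy_fn(positions, atom_idx) is assumed to return a scalar:
    the sum of the atomic energies of the atoms listed in atom_idx.
    """
    positions = positions.detach().requires_grad_(True)
    forces = torch.zeros_like(positions)
    n = positions.shape[0]
    for start in range(0, n, chunk_size):
        idx = torch.arange(start, min(start + chunk_size, n))
        e_chunk = atomic_energy_fn(positions, idx)
        # Each chunk's autograd graph is freed after grad(), so peak
        # memory scales with the chunk rather than the whole system
        (grad,) = torch.autograd.grad(e_chunk, positions)
        forces -= grad
    return forces
```

Whether this actually saves memory would depend on how much of the message-passing graph each chunk touches; with a large receptive field the chunks overlap heavily and the savings shrink.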