openmm / openmm-torch

OpenMM plugin to define forces with neural networks

Simulating large system using TensorNet #159

Closed kei0822kei closed 2 weeks ago

kei0822kei commented 2 weeks ago

Hi,

Thank you for maintaining this great package. I want to simulate a relatively large system (~10,000 atoms) using TensorNet.

After I finished training a model using TensorNet-SPICE.yaml, I tried to apply it to an MD simulation of a larger system using openmm-torch. When I simulated a system of ~4000 atoms, the full 80 GiB of GPU memory filled up. I found that calculating the forces (the backpropagation phase) consumed most of the GPU memory and resulted in an out-of-memory error.

Is there a possible way to avoid this?

I expect that calculating the atomic energies with the `representation_model` (TensorNet) could be split into batches, which would avoid using so much GPU memory. Is that possible?
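As a side note, the batching idea described above can be sketched in plain PyTorch. The toy model below is an assumption, not TensorNet: a real message-passing model's atomic energies depend on neighboring atoms, so each chunk would also have to carry its neighborhood. The sketch only illustrates the memory mechanism — calling `autograd.grad` once per chunk so that only one chunk's autograd graph is alive at a time, while the force contributions are accumulated:

```python
import torch

class ToyAtomicEnergy(torch.nn.Module):
    """Hypothetical stand-in for a per-atom energy model (NOT TensorNet):
    each atom's energy depends only on its own position, so chunks are
    independent. This is the simplifying assumption of the sketch."""
    def __init__(self):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
        )

    def forward(self, pos):
        # Sum of per-atom energies for the atoms passed in
        return self.mlp(pos).sum()

def chunked_energy_and_forces(model, positions, chunk_size):
    """Compute total energy and forces chunk by chunk, freeing each
    chunk's autograd graph before moving to the next one."""
    positions = positions.detach().requires_grad_(True)
    total_energy = 0.0
    forces = torch.zeros_like(positions)
    for start in range(0, positions.shape[0], chunk_size):
        chunk = positions[start:start + chunk_size]
        energy = model(chunk)
        # Backprop this chunk only; its graph is released afterwards,
        # so peak memory scales with chunk_size, not system size.
        (grad,) = torch.autograd.grad(energy, positions)
        forces -= grad
        total_energy += energy.item()
    return total_energy, forces
```

Because the total energy is a sum of chunk energies, the accumulated gradients equal the gradient of the full energy; the trade-off is one backward pass per chunk instead of one overall.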

peastman commented 2 weeks ago

This isn't really a question about OpenMM-Torch. It just calculates whatever PyTorch model you give it, and the model takes however much memory it takes. In this case you created the model with TorchMD-Net, so that's where any changes would need to be made to reduce its memory use.

kei0822kei commented 2 weeks ago

Thank you for your advice, and sorry for asking in the wrong place.