Open dmighty007 opened 6 months ago
Please let me know if you require any more information.
I believe this is an error with your model, in particular because of lines like these:
pos = positions[::4].to("cpu")
...
y = self.encoder(x)[:,1].sum()
PyTorch does not like it when you run backward only on a subset of the output. To test this, try running backward on the model alone (no OpenMM or openmm-torch involved). My guess is that you will see a similar error. Something like this:
import torch
pos = torch.rand(10, 3)
box = torch.eye(3) * 10
model = torch.jit.load("model.pt")
pos.requires_grad_()
y = model(pos, box)
y.backward() # compute gradients
print(pos.grad)
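To make the failure mode above concrete, here is a small illustrative sketch (not the author's actual model): `pos.grad` comes back as `None` whenever backward never reaches `pos` through the graph, while slicing by itself is harmless and merely zeroes the gradient of unused rows.

```python
import torch

# Gradients only reach leaves that the output actually depends on.
a = torch.rand(3, requires_grad=True)
b = torch.rand(3, requires_grad=True)
loss = (a ** 2).sum()        # b never enters the computation
loss.backward()
print(a.grad is not None)    # True
print(b.grad)                # None -- backward never reached b

# Slicing by itself is fine: unused rows simply get zero gradients.
pos = torch.rand(10, 3, requires_grad=True)
(pos[::4] ** 2).sum().backward()
print(pos.grad[1])           # zeros -- row 1 was never used
```

So if the jitted model returns `None` gradients, something inside it (a detach, a conversion to NumPy, a non-differentiable op) is cutting the graph between the positions and the returned energy.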
As a side note, the box is already passed to your model as a 3x3 PyTorch tensor, so you should not need to convert it. You can extract its diagonal with `box.diag()`.
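For example (a minimal sketch, assuming an orthorhombic box so the edge lengths sit on the diagonal), the diagonal can be used directly for minimum-image wrapping:

```python
import torch

box = torch.eye(3) * 10.0
lengths = box.diag()                      # edge lengths of a rectangular box

# Minimum-image convention for a displacement vector (orthorhombic box only)
r = torch.tensor([9.5, 0.2, -9.9])
r_mic = r - lengths * torch.round(r / lengths)
print(r_mic)                              # tensor([-0.5000,  0.2000,  0.1000])
```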
Thanks! I got your point. It does return None, which makes sense. I'll try to modify the model to operate on the whole system. But is there any trick to perform certain operations on only a subset of the positions? Say I want only the oxygen atoms of my system!
Also, I did notice `box.diag()` in the documentation, but was too lazy to change it :).
You want your TorchForce to act only on a subset of the system? As an easy workaround, you can just multiply the positions you do not want by zero, right?
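The masking workaround could look like this (a hypothetical sketch: the O, H, H atom ordering and the toy CV are assumptions, not taken from the actual system):

```python
import torch

# Assumed water-box ordering O, H, H, O, H, H, ... -- substitute the
# oxygen indices from your actual topology.
n_atoms = 9
pos = torch.rand(n_atoms, 3, requires_grad=True)

mask = torch.zeros(n_atoms, 1)
mask[::3] = 1.0              # keep every third atom (the oxygens here)

masked = pos * mask          # gradients still flow, but only through oxygens
cv = masked.pow(2).sum()     # stand-in for the real CV
cv.backward()
print(pos.grad[1])           # zeros -- a hydrogen, masked out
```

Because the mask multiplication stays inside the autograd graph, every atom gets a well-defined gradient (zero for the masked ones), which is what TorchForce needs to compute forces.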
Yes. Thanks! I'll try that.
I am trying to use TorchForce to bias a simulation (a box full of waters). The torch model that calculates the CV looks for the nearest neighbors of a reference water molecule (within a cutoff), then calculates the pairwise distances between them; these are my features. When I add this jitted model to the OpenMM system, it throws an OpenMMException. The issue is probably related to the grad of the tensor I'm returning from my TorchForce model.
The model used in TorchForce:
The OpenMM simulation (with MetaD):
The error it throws:
My conda environment:
conda list:
I used mamba to install openmm-torch. I suppose this is a question and not likely a bug. It would be really helpful if you could find where I am making a mistake!
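For reference, a CV of the kind described above could be sketched like this (hypothetical code, not the actual attached model; the class name, reference index, and cutoff are assumptions, and the box argument is omitted for brevity):

```python
import torch

class PairwiseCV(torch.nn.Module):
    """Sketch: neighbors of a reference atom within a cutoff,
    then the sum of their pairwise distances."""

    def __init__(self, ref_index: int = 0, cutoff: float = 1.0):
        super().__init__()
        self.ref_index = ref_index
        self.cutoff = cutoff

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # distance of every atom from the reference atom
        delta = positions - positions[self.ref_index]
        dist = delta.norm(dim=1)
        neighbors = positions[dist < self.cutoff]   # differentiable selection
        n = neighbors.shape[0]
        # unique pairs only: avoids zero-length self-pairs, whose
        # norm has an undefined (NaN) gradient
        i, j = torch.triu_indices(n, n, offset=1)
        pair = (neighbors[i] - neighbors[j]).norm(dim=1)
        return pair.sum()

# gradients flow back to all positions the CV depends on
pos = torch.rand(30, 3, requires_grad=True)
cv = PairwiseCV(cutoff=1.0)(pos)
cv.backward()
print(pos.grad is not None)   # True
```

The key point for TorchForce is that every operation from `positions` to the returned scalar must stay inside the autograd graph, and degenerate zero-distance pairs must be excluded so the gradient never becomes NaN.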