Closed: RaulPPelaez closed this 1 year ago
Torchani cannot be installed with pytorch2, which forces us to skip some tests. EDIT: Torchani has made a new torch2-compatible release.
All tests pass and the CI works for an installation with pytorch2. I believe this should be merged now, with work on compile() compatibility done in another PR. A new release could be made now so that users can install NNPOps alongside pytorch2.
In case you have some experience with torch2 compile: this test fails miserably in CUDA mode:
```python
import pytest
import torch as pt
from NNPOps.neighbors import getNeighborPairs


@pytest.mark.parametrize('device', ['cpu', 'cuda'])
@pytest.mark.parametrize('dtype', [pt.float32, pt.float64])
def test_torch_compile_compatible(device, dtype):

    class ForceModule(pt.nn.Module):
        def forward(self, positions):
            # Neighbor pairs within the cutoff; padded entries come back as NaN
            neighbors, deltas, distances = getNeighborPairs(positions, cutoff=1.0)
            mask = pt.isnan(distances)
            distances = distances[~mask]
            return pt.sum(distances**2)

    original_model = ForceModule()
    num_atoms = 10
    positions = (20 * pt.randn((num_atoms, 3), device=device, dtype=dtype)) - 10
    original_model(positions)  # eager execution works
    model = pt.compile(original_model)
    model(positions)  # fails here on CUDA
```
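A likely contributing factor (my reading, not confirmed upstream): the boolean-mask indexing `distances[~mask]` produces a tensor whose shape depends on the data, and data-dependent shapes are a known pain point for torch.compile's tracing. The masking step itself is just a NaN filter before the reduction, as this stdlib-only sketch (no torch, illustrative names) shows:

```python
import math

def sum_squared_finite(distances):
    """Sum of squared distances, dropping NaN entries
    (mirrors the mask/filter step in ForceModule.forward)."""
    return sum(d * d for d in distances if not math.isnan(d))

# NaN values mark padded neighbor slots; they are filtered out before the
# reduction, so the number of surviving elements depends on the input values.
print(sum_squared_finite([1.0, float('nan'), 2.0]))  # 5.0
```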
It yields a really verbose error about something called FakeTensor that makes the most obscure gcc recursive-template error look clear and informative.
I have not been able to solve this. From what I have gathered, this should not happen and it is a bug in torch (there are a lot of issues describing stuff like this: https://github.com/pytorch/pytorch/issues/96742, https://github.com/pytorch/pytorch/issues/95791).
Yes, we can skip the compile feature for now.
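One way to skip it (a sketch, assuming pytest; the reason string is illustrative) is to mark only the CUDA case as an expected failure, so the CPU path stays covered:

```python
import pytest

# Mark the CUDA variant as xfail until the upstream FakeTensor bug is fixed;
# the CPU variant keeps running normally.
cuda_param = pytest.param(
    'cuda',
    marks=pytest.mark.xfail(reason='torch.compile FakeTensor bug, see pytorch#96742'),
)

@pytest.mark.parametrize('device', ['cpu', cuda_param])
def test_torch_compile_compatible(device):
    # ... same body as the test above ...
    pass
```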
Ok I think this is done now.
@RaulPPelaez can I merge?
Yes, thanks. @raimis
This PR starts the work of making NNPOps compatible with pytorch 2.0 and torch.compile.