Note that #20 tries to do the same.
@not-matt, regarding performance, could you run the benchmarks and compare with https://github.com/openmm/openmm-ml/pull/20#issuecomment-1028297914
With the benchmark slightly modified to fix a couple of bugs (https://github.com/openmm/openmm-ml/pull/20#issuecomment-1029053808), I got 0.0031 s/step using NNPOps. This is in line with the results from that PR; the implementations are pretty much identical under the hood.
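For anyone re-running the numbers, a minimal timing sketch in the spirit of that benchmark (it assumes `simulation` is an already-built `openmm.app.Simulation` using the ML potential):

```python
import time

# assumption: `simulation` is an existing openmm.app.Simulation set up with the ML potential
simulation.step(10)  # warm-up, so JIT compilation and kernel tuning are excluded from the timing

nsteps = 100
start = time.time()
simulation.step(nsteps)
print(f"{(time.time() - start) / nsteps:.4f} s/step")
```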
Regarding the optimization keyword, we have 3 cases:

```python
openmmml.MLPotential('ani2x', optimise=None)
openmmml.MLPotential('ani2x', optimise='cuaev')
openmmml.MLPotential('ani2x', optimise='nnpops')
```

I think this will be the simplest API for users.
Ping: @jchodera @peastman @yueyericardo @dominicrufa
How about calling it `implementation`, since that's really what you're selecting: which implementation to use. I'd also suggest that if you want the default TorchANI implementation, the value should be `'torchani'` rather than `None`. That will be clearer.
If there are no more comments, the API should look like this:

```python
openmmml.MLPotential('ani2x', implementation='torchani')
openmmml.MLPotential('ani2x', implementation='cuaev')
openmmml.MLPotential('ani2x', implementation='nnpops')  # default
```
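Downstream usage would then be unchanged; a sketch (`createSystem` is the existing `MLPotential` entry point, and `topology` is assumed to already exist):

```python
from openmmml import MLPotential

potential = MLPotential('ani2x', implementation='nnpops')
# assumption: `topology` is an existing openmm.app.Topology covering the ML atoms
system = potential.createSystem(topology)
```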
@not-matt, could you implement this?
Yep, can do.
@raimis `cuaev` is enabled like so?

```python
self.model.aev_computer.use_cuda_extension = True
```
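For context, standalone that would look something like this sketch (assuming a TorchANI build with the cuaev extension compiled):

```python
import torch
import torchani

# assumption: this TorchANI build was compiled with the cuaev extension
device = torch.device('cuda')
model = torchani.models.ANI2x(periodic_table_index=True).to(device)
model.aev_computer.use_cuda_extension = True  # the flag in question
```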
> @raimis `cuaev` is enabled like so? `self.model.aev_computer.use_cuda_extension = True`
@yueyericardo could you confirm?
confirmed!
@not-matt any progress?
@raimis Yep, just about there. I'm trying to test it thoroughly with each optimisation option.
Also, it would be good to have a test to make sure things don't get broken.
Is this ready to merge?
I think we still need some tests to make sure things keep working.
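Something like a parametrized smoke test could cover the three options (a sketch only; a real test should also compare energies against the reference implementation):

```python
import pytest
from openmmml import MLPotential

@pytest.mark.parametrize("implementation", ["torchani", "cuaev", "nnpops"])
def test_ani2x_implementation(implementation):
    # smoke test: constructing the potential with each implementation should not raise
    potential = MLPotential("ani2x", implementation=implementation)
    assert potential is not None
```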
What else has to be done before we can merge this? It would be really nice if we could get it in before building the OpenMM 8.0 beta, so we can get beta users to try out the full stack including optimized kernels.
This is superseded by #35.
These changes simplify the use of NNPOps' OptimisedTorchANI in OpenMM simulations, allowing ANI2x to be used with a ~6x speedup on my machine. They also re-enable periodic boundary conditions on CUDA (see https://github.com/aiqm/torchani/issues/607#issuecomment-1023628057).
Thanks to @jchodera for his example code (https://github.com/openmm/openmm-torch/issues/59#issuecomment-1028042839). Addresses https://github.com/openmm/openmm-ml/issues/14.
Demonstration: https://github.com/meyresearch/ANI-Peptides/blob/main/demos/ANI_minimal.ipynb
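Under the hood this amounts to wrapping the TorchANI model with NNPOps, roughly as in the NNPOps README (the water molecule here is illustrative):

```python
import torch
import torchani
from NNPOps import OptimizedTorchANI

device = torch.device('cuda')
model = torchani.models.ANI2x(periodic_table_index=True).to(device)

# illustrative molecule: water, as (1, n_atoms) atomic numbers and (1, n_atoms, 3) positions in angstroms
species = torch.tensor([[8, 1, 1]], device=device)
positions = torch.tensor([[[0.00, 0.00, 0.00],
                           [0.00, 0.00, 0.96],
                           [0.93, 0.00, -0.24]]], device=device)

model = OptimizedTorchANI(model, species).to(device)  # swap in the NNPOps-accelerated kernels
energy = model((species, positions)).energies
```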
It's important to note that this PR lacks an option to choose between classic TorchANI and OptimisedTorchANI, instead enforcing NNPOps as the only method. I'd be happy to implement a way to choose between the two, but I'm not sure how that would best fit this library's structure.
e.g. via an extra keyword argument (a hypothetical sketch; names are illustrative):
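```python
from openmmml import MLPotential

# hypothetical: select the implementation via a keyword argument
potential = MLPotential('ani2x', optimise='nnpops')
```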
or perhaps via a separate model name (also hypothetical):
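```python
# hypothetical: register the optimised variant under its own model name
potential = MLPotential('ani2x_nnpops')
```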