openmm / openmm-ml

High level API for using machine learning models in OpenMM simulations

Implement NNPOps Optimised ANI #21

Closed: not-matt closed this 1 year ago

not-matt commented 2 years ago

These changes simplify the use of NNPOps OptimisedTorchANI in OpenMM simulations, allowing ANI2x to be used with ~6x speedup on my machine. It also allows periodic boundary conditions again for CUDA (see https://github.com/aiqm/torchani/issues/607#issuecomment-1023628057)

Thanks to @jchodera for his example code https://github.com/openmm/openmm-torch/issues/59#issuecomment-1028042839 Addresses https://github.com/openmm/openmm-ml/issues/14

Demonstration https://github.com/meyresearch/ANI-Peptides/blob/main/demos/ANI_minimal.ipynb

Important to note that this PR lacks an option to choose between classic TorchANI and OptimisedTorchANI, and instead enforces NNPOps as the only method. I'd be happy to implement a way to choose between the two, but I'm not sure how best to fit that into this library's structure.

eg.

openmmml.MLPotential('ani2x_optimised')

or perhaps

openmmml.MLPotential('ani2x', optimised=True)
raimis commented 2 years ago

Note that #20 tries to do the same.

raimis commented 2 years ago

@not-matt, regarding the performance, could you run the benchmarks and compare with https://github.com/openmm/openmm-ml/pull/20#issuecomment-1028297914

not-matt commented 2 years ago

With the benchmark slightly modified to fix a couple of bugs (https://github.com/openmm/openmm-ml/pull/20#issuecomment-1029053808), I got 0.0031 s/step using NNPOps. This is in line with the results from that PR - the implementations are pretty much identical under the hood.
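For reference, an s/step measurement like the one above can be sketched as a simple timing loop. This is a generic illustration, not the benchmark script from #20; the `step` callable is a stand-in for whatever advances the simulation (e.g. wrapping `simulation.step(1)` in OpenMM):

```python
import time

def seconds_per_step(step, n_steps=100, n_warmup=10):
    """Time an arbitrary step() callable and return seconds per step.

    A few warm-up calls are made first so one-time costs (JIT
    compilation, CUDA kernel caching, etc.) don't skew the average.
    """
    for _ in range(n_warmup):
        step()
    start = time.perf_counter()
    for _ in range(n_steps):
        step()
    return (time.perf_counter() - start) / n_steps

# Trivial stand-in for a simulation step, just to show the call shape:
s_per_step = seconds_per_step(lambda: sum(range(1000)))
print(f"{s_per_step:.2e} s/step")
```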

raimis commented 2 years ago

Regarding the optimization keyword, we have 3 cases: the plain TorchANI implementation, TorchANI with the cuAEV extension, and NNPOps.

I think this will be the simplest API for a user.

Ping: @jchodera @peastman @yueyericardo @dominicrufa

peastman commented 2 years ago

How about calling it implementation, since that's really what you're selecting: which implementation to use. I'd also suggest that if you want the default TorchANI implementation the value would be 'torchani' rather than None. That will be clearer.

raimis commented 2 years ago

If no more comments, the API should look like this:

openmmml.MLPotential('ani2x', implementation='torchani')
openmmml.MLPotential('ani2x', implementation='cuaev')
openmmml.MLPotential('ani2x', implementation='nnpops') # default
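A minimal sketch of how the implementation keyword could be validated and dispatched inside the ANI potential factory. The names and structure here are illustrative only, not the actual openmm-ml internals:

```python
# Hypothetical dispatch on the proposed `implementation` keyword.
# The real openmm-ml code paths differ; this only shows the shape.
SUPPORTED = ('nnpops', 'cuaev', 'torchani')

def resolve_implementation(implementation='nnpops'):
    """Validate the requested ANI implementation and describe its effect."""
    if implementation not in SUPPORTED:
        raise ValueError(
            f"Unsupported implementation {implementation!r}; "
            f"expected one of {SUPPORTED}")
    return {
        'nnpops': 'wrap the model with NNPOps OptimizedTorchANI',
        'cuaev': 'set aev_computer.use_cuda_extension = True on the model',
        'torchani': 'use the plain TorchANI model unchanged',
    }[implementation]

print(resolve_implementation())              # default is 'nnpops'
print(resolve_implementation('torchani'))
```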

@not-matt could you implement?

not-matt commented 2 years ago

Yep, can do.

@raimis cuaev is enabled like so?

self.model.aev_computer.use_cuda_extension = True
raimis commented 2 years ago

> @raimis cuaev is enabled like so?
>
> self.model.aev_computer.use_cuda_extension = True

@yueyericardo could you confirm?

yueyericardo commented 2 years ago

confirmed!

raimis commented 2 years ago

@not-matt any progress?

not-matt commented 2 years ago

@raimis Yep, just about there. I'm testing it thoroughly with each of the different optimisation options.

raimis commented 2 years ago

Also, it would be good to have a test to make sure things don't get broken.

peastman commented 2 years ago

Is this ready to merge?

raimis commented 2 years ago

I think we still need some tests to make sure things keep working.

peastman commented 2 years ago

What else has to be done before we can merge this? It would be really nice if we could get it in before building the OpenMM 8.0 beta, so we can get beta users to try out the full stack including optimized kernels.

peastman commented 1 year ago

This is superseded by #35.