openmm / NNPOps

High-performance operations for neural network potentials

NNPOps 0.4 #86

Closed raimis closed 1 year ago

raimis commented 1 year ago

It is time for the next release, especially with the PBC fix (#83).

Is anybody working on something that they want to include?

Ping: @peastman @RaulPPelaez @sef43

peastman commented 1 year ago

Sounds good to me.

RaulPPelaez commented 1 year ago

I would like to test https://github.com/openmm/NNPOps/pull/80 a bit more. For https://github.com/openmm/NNPOps/issues/84, I say we jump straight to Torch 2.0, which will probably solve 1.13 in the process. Let's make 0.4 now.

sef43 commented 1 year ago

yep go ahead!

raimis commented 1 year ago

A release has been created: https://github.com/openmm/NNPOps/releases/tag/v0.4

Now, let's see if conda-forge is going to pick it up automatically.

raimis commented 1 year ago

https://github.com/conda-forge/nnpops-feedstock/pull/16

raimis commented 1 year ago

The package builds fail because of a tolerance issue with the new tests.

sef43 commented 1 year ago

Yeah, it is failing the tolerance test (which passes on the GitHub Actions CI that runs on this repo): grad_error = 0.0074 fails the assert(grad_error < 7e-3). I could loosen the tolerance to assert(grad_error < 8e-3) or even 1e-2. These seem like reasonably loose tolerances to me. Are these what we expect from float32-precision gradient calculations? @raimis @peastman
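For context, a minimal sketch of the kind of check behind an assert like `assert(grad_error < 7e-3)`: compare an analytic gradient against a finite-difference estimate and assert that their relative error is below a tolerance. The toy pair energy E = sum over pairs of 1/r_ij and all names here (`energy`, `analytic_grad`, `grad_error`) are illustrative assumptions, not the NNPOps test code; in full float64 the error is tiny, while float32 evaluation pushes it toward the 1e-3 scale discussed above.

```python
import math
import random

def energy(pos):
    # Toy energy: E = sum over unordered pairs of 1/r_ij
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            e += 1.0 / math.sqrt(sum(x * x for x in d))
    return e

def analytic_grad(pos):
    # dE/dpos[i] = sum over j != i of -(pos[i] - pos[j]) / r_ij^3
    grad = [[0.0, 0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            r = math.sqrt(sum(x * x for x in d))
            for k in range(3):
                grad[i][k] += -d[k] / r ** 3
    return grad

random.seed(0)
pos = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(8)]

# Central-difference gradient with step h
h = 1e-5
fd = [[0.0] * 3 for _ in pos]
for i in range(len(pos)):
    for k in range(3):
        pos[i][k] += h
        ep = energy(pos)
        pos[i][k] -= 2 * h
        em = energy(pos)
        pos[i][k] += h  # restore the original coordinate
        fd[i][k] = (ep - em) / (2 * h)

# Max relative error of the finite-difference gradient vs the analytic one
ga = analytic_grad(pos)
max_abs = max(abs(g) for row in ga for g in row)
grad_error = max(abs(fd[i][k] - ga[i][k])
                 for i in range(len(pos)) for k in range(3)) / max_abs
print(f"grad_error = {grad_error:.2e}")
assert grad_error < 7e-3  # in float64 this passes with a large margin
```

The tolerance in such a check has to absorb both the finite-difference truncation error and the floating-point roundoff of the energy evaluation, which is why a float32 model needs a much looser bound than a float64 one.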

raimis commented 1 year ago

I'm working on that.

raimis commented 1 year ago

Packages are available: https://anaconda.org/conda-forge/nnpops/files?version=0.4

raimis commented 1 year ago

@jchodera could you tweet about the release (https://github.com/openmm/NNPOps/releases/tag/v0.4)?

jchodera commented 1 year ago

Done! https://twitter.com/openmm_toolkit/status/1632949245316845569

raimis commented 1 year ago

Done!