j-wags opened this issue 2 years ago
@j-wags: Do you need to exactly reproduce what was done previously if it's easy to consistently benchmark all generations of force field releases (and pre-releases) in parallel with a slightly different, more modern version of the codebase?
Raw notes from story review, shared here for visibility:
- For fah-alchemy, do we need to be able to call multiple openff-toolkit versions? These would each need their own associated environment, whether baked into e.g. a conda env or a Docker image; calls must cross process boundaries for this.
- The openff-toolkit version should be used in the calculation itself, not just stored in provenance in the resulting calculations.
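One way to make per-package versions an explicit part of each result (rather than an afterthought) is to record them at dispatch time inside each worker environment. A minimal sketch, assuming workers run in per-version environments; `record_provenance` is a hypothetical helper, not part of fah-alchemy:

```python
from importlib import metadata


def record_provenance(packages):
    """Hypothetical helper: return a {package: version} dict for this
    environment, marking packages that are not installed."""
    provenance = {}
    for name in packages:
        try:
            provenance[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            provenance[name] = "not-installed"
    return provenance


# Each per-version worker environment would store this alongside its results.
prov = record_provenance(["openff-toolkit", "pmx"])
print(prov)
```

Storing such a dict with every calculation makes it possible to verify, after the fact, exactly which toolkit version produced each benchmark number.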
In broad terms, what are you trying to do?
Automate a pipeline that's as similar as possible to @dfhahn's earlier benchmarks (section 2.2.3 of this link). Running protein-ligand free energy benchmarks is a requirement (or at least a very important "should" priority) for the release of Rosemary. Having a rapid and low-cost way to do this would greatly reduce the friction of testing new parameters and proposed FF modifications.
How do you believe using this project would help you to do this?
I envision that this project would provide an easy interface to ingest:
- Ideally, the structure inputs could be taken directly from https://github.com/openforcefield/protein-ligand-benchmark
What problems do you anticipate with using this project to achieve the above?
The PMX codebase is under continuous development and is somewhat fragmented, so it would be helpful to work out pinning strategies ensuring that use of this interface captures only the change in FF performance, not underlying changes in methodology.
Preparing protein structures for simulation is complex both technically and scientifically. The workflows that perform this preparation will likely undergo further development, and so a similar pinning strategy will be important to isolate the calculation results from changes other than those in the force fields.
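One concrete pinning strategy would be an exact-pinned environment file per benchmark campaign, so that only the force field varies between runs. A sketch with hypothetical version numbers and environment name (not a recommendation of specific releases):

```yaml
# Hypothetical pinned environment for one benchmark campaign.
# Exact pins isolate FF comparisons from toolkit and method drift.
name: plb-rosemary-campaign
channels:
  - conda-forge
dependencies:
  - python=3.9                # versions here are illustrative only
  - openff-toolkit=0.10.6
  - pip
  - pip:
      # Pin PMX to an exact commit so methodology changes can't leak in.
      - git+https://github.com/deGrootLab/pmx@<commit>
```

The same file (or a Docker image built from it) would be reused unchanged across all force field generations being compared, and archived with the results.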