davidlmobley opened this issue 7 years ago
I'd say we should cap the relaxation testing at 1000 `nsteps_neq`, since updating the lambdas is slow, so anything longer than that would probably be painful to test*. A reasonable testing range would probably be [10, 50, 100, 250, 500, 1000] for `nsteps_neq`.
We can also test on #48 once that's set up.
*At least until the CustomIntegrator is improved (see https://github.com/pandegroup/openmm/issues/1661 and https://github.com/pandegroup/openmm/issues/1772).
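For concreteness, here is a minimal sketch of how a sweep over that testing range could be scripted. The `run_ncmc_trials` callable is hypothetical (a stand-in for whatever sets up the alchemical integrator and attempts the NCMC moves); the harness just records acceptance and wall-clock cost for each `nsteps_neq` value, since the cost per attempt is the reason for capping at 1000.

```python
import time

# Candidate relaxation lengths from the range proposed above.
NSTEPS_NEQ_RANGE = [10, 50, 100, 250, 500, 1000]

def sweep_nsteps_neq(run_ncmc_trials, n_attempts=100):
    """Time and score an NCMC move runner at each candidate nsteps_neq.

    `run_ncmc_trials(nsteps_neq, n_attempts)` is a user-supplied (hypothetical)
    callable assumed to build the nonequilibrium protocol with that relaxation
    length, attempt `n_attempts` moves, and return the number accepted.
    """
    results = []
    for nsteps_neq in NSTEPS_NEQ_RANGE:
        start = time.time()
        n_accepted = run_ncmc_trials(nsteps_neq, n_attempts)
        elapsed = time.time() - start
        results.append({
            "nsteps_neq": nsteps_neq,
            "acceptance": n_accepted / n_attempts,
            "seconds_per_attempt": elapsed / n_attempts,
        })
    return results
```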
Updates to this:
We should probably also have a standard benchmark that varies the number of perturbation/propagation steps, similar to what saltswap has done (e.g. see the graph in https://github.com/pandegroup/openmm/issues/1832), so we can easily check protocols for new move types we introduce.
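A rough sketch of what that saltswap-style protocol scan could look like, assuming a hypothetical `attempt_moves(n_perturbation, n_propagation, n_attempts)` callable that runs the moves and returns how many were accepted; the output is a grid of acceptance rates suitable for the kind of plot linked above.

```python
import itertools

def scan_protocols(attempt_moves, perturbation_steps, propagation_steps, n_attempts=50):
    """Acceptance rate over a grid of (perturbation, propagation) step counts.

    `attempt_moves` is a placeholder for whatever runs `n_attempts` NCMC moves
    with the given protocol and reports the number accepted.
    """
    grid = {}
    for n_pert, n_prop in itertools.product(perturbation_steps, propagation_steps):
        n_accepted = attempt_moves(n_pert, n_prop, n_attempts)
        grid[(n_pert, n_prop)] = n_accepted / n_attempts
    return grid
```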
We really need a benchmark set or suite with a couple of diverse systems we can use to check the performance of different move proposal schemes, integrators, etc. We want to move away from running a few small simulations locally whenever we change something and checking that acceptance roughly stays the same or gets better, and toward knowing EXACTLY how much different approaches impact sampling efficiency on a fixed set of systems. We want this to end up basically push-button: run some utility on our queue and get back an assessment of the current level of performance.
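One way the push-button piece could look, sketched under the assumption that each benchmark system and each move scheme gets registered as a callable; none of these names exist yet, they just illustrate the shape of a driver we could submit to the queue and that would hand back a summary table.

```python
import csv

def run_benchmark_suite(systems, schemes, n_attempts, output_csv="benchmark_summary.csv"):
    """Run every move scheme on every benchmark system and tabulate acceptance.

    `systems` maps a name to a factory returning a prepared simulation object;
    `schemes` maps a name to a callable taking that object plus `n_attempts`
    and returning the number of accepted moves. Both are placeholders for
    whatever registration mechanism we settle on.
    """
    rows = []
    for system_name, make_system in systems.items():
        for scheme_name, run_scheme in schemes.items():
            simulation = make_system()
            n_accepted = run_scheme(simulation, n_attempts)
            rows.append({
                "system": system_name,
                "scheme": scheme_name,
                "acceptance": n_accepted / n_attempts,
            })
    with open(output_csv, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["system", "scheme", "acceptance"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```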
Obviously, we should include toluene in lysozyme, since we've done so much with it already and it's easy to figure out exactly how to analyze the data to assess efficiency (number of transitions per unit time, convergence of populations, etc.). But what else should be in our test set? @nathanmlim - do you think we can get your initial test system to this stage too?
And what should we test? I'd think we'd normally want to look at each system and, for each one, vary the amount of relaxation over some range (how broad a range?) while tracking measures of sampling efficiency.
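For the efficiency measures mentioned above (transitions per unit time, convergence of populations), here's a small self-contained sketch that works on a discrete time series of binding-mode labels; how those labels get assigned for a given system (e.g. toluene in lysozyme) is a separate analysis step and not shown.

```python
import numpy as np

def transitions_per_time(mode_labels, total_time_ns):
    """Count transitions between binding modes per nanosecond of simulation."""
    labels = np.asarray(mode_labels)
    n_transitions = int(np.count_nonzero(labels[1:] != labels[:-1]))
    return n_transitions / total_time_ns

def running_populations(mode_labels, modes):
    """Running estimate of each mode's population, for convergence checks."""
    labels = np.asarray(mode_labels)
    steps = np.arange(1, len(labels) + 1)
    return {mode: np.cumsum(labels == mode) / steps for mode in modes}

# Example on a synthetic label series: two modes with populations near 0.5 each.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    print(transitions_per_time(labels, total_time_ns=10.0))
    print({m: p[-1] for m, p in running_populations(labels, modes=[0, 1]).items()})
```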