Open jchodera opened 8 years ago
> but always use a much shorter cutoff (e.g. `9*angstroms`) when running dynamics.
You mean in the NCMC steps?
> but always use a much shorter cutoff (e.g. `9*angstroms`) when running dynamics.
For `MCMCSampler`, let's suppose we alternate between updating the configuration and updating the state index. Suppose we want each update (configuration, state index) to leave the sampler in a "true" (x, k) state sampled from the posterior.

For updating the configuration, we could use HMC:

- Evaluate `initial_total_energy` with `cutoff_long`, equal to slightly less than half the smallest box width
- Switch to `cutoff_short = 9A`, tuned for efficiency
- Run `nsteps` of dynamics using `timestep`, where `timestep` is tuned for high acceptance
- Switch back to `cutoff_long` and evaluate `final_total_energy`
- Use `delta_energy = final_total_energy - initial_total_energy` in the Metropolis criterion

We might instead choose to use GHMC for the individual timesteps, in which case we would do this:

- Evaluate `initial_potential_long` for `cutoff_long`
- Switch to `cutoff_short = 9A`, tuned for efficiency, and evaluate `initial_potential_short`
- Run `nsteps` of GHMC using `timestep`, where `timestep` is tuned for high GHMC acceptance
- Evaluate `final_potential_short` for `cutoff_short`
- Switch back to `cutoff_long` and evaluate `final_potential_long`
- Use `delta_energy = (initial_potential_short - initial_potential_long) + (final_potential_long - final_potential_short)` in the Metropolis criterion, discarding the whole sequence of GHMC steps if this is not accepted

Ok, thanks!
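A minimal Python sketch of the two-cutoff acceptance bookkeeping described above. The potential functions and the short-cutoff propagation here are toy stand-ins, not the perses/OpenMM API, and the sketch assumes the inner dynamics leaves the short-cutoff distribution invariant (as GHMC would):

```python
import math
import random

# Toy stand-ins for the reduced potential evaluated at two cutoffs.
# In a real setup these would be OpenMM energy evaluations with
# cutoff_long (just under half the box width) and cutoff_short (~9 A).
def potential_long(x):
    return 0.5 * x * x            # "true" target potential

def potential_short(x):
    return 0.5 * x * x + 0.1 * x  # cheaper, slightly biased potential

def propagate_short(x, nsteps=10, timestep=0.1):
    # Placeholder for nsteps of (G)HMC dynamics run under cutoff_short.
    # Here: a random walk, just to exercise the bookkeeping.
    for _ in range(nsteps):
        x += timestep * random.gauss(0.0, 1.0)
    return x

def two_cutoff_metropolis(x, rng=random.random):
    """One MCMC update targeting the long-cutoff distribution while
    running dynamics under the short cutoff."""
    initial_potential_long = potential_long(x)
    initial_potential_short = potential_short(x)

    x_new = propagate_short(x)

    final_potential_short = potential_short(x_new)
    final_potential_long = potential_long(x_new)

    # Correction for having propagated under the short cutoff:
    delta_energy = ((initial_potential_short - initial_potential_long)
                    + (final_potential_long - final_potential_short))

    if rng() < math.exp(min(0.0, -delta_energy)):
        return x_new  # accept
    return x          # reject: discard the whole sequence of steps
```

Accepting with probability min(1, exp(-delta_energy)) cancels the bias from sampling the short-cutoff distribution, which is why the whole sequence of inner steps must be discarded on rejection.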
This might have interactions with #77
Maybe we can have a quick huddle on Monday to figure out what the best way to handle this is (or whether we should punt on it for the first paper)?
There are several options to consider:
I am inclined to prefer those options in the order 3, 2, 1. My suspicion is that the slowdown might be overwhelmed by the slowness of other components, so it's worth at least taking a look at that option, since it's simple.
Barring that, I think we should try 2, because presumably the reference calculations will have this correction, and I don't think we want to present something that is systematically off.
We spoke about this in person, and decided we might punt on this until a subsequent paper. We can use the same cutoff (e.g. 9A) for both relative FEP and Perses in the first paper.
@jchodera what should we do about this issue, since we are reweighting to the non-alchemical endpoints anyway?
We just need to enlarge the nonbonded cutoff for the non-alchemical endpoints. We can enlarge to a little less than half the box size.
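As a small illustration, the "a little less than half the box size" rule can be written out directly; the `max_safe_cutoff` name and the 0.99 safety margin are hypothetical choices for this sketch:

```python
def max_safe_cutoff(box_widths, margin=0.99):
    """Largest nonbonded cutoff consistent with the minimum-image
    convention: a little less than half the smallest box width.
    box_widths: iterable of the three box edge lengths (e.g. in nm)."""
    return margin * 0.5 * min(box_widths)

# Example: a 4.2 x 4.5 x 4.8 nm water box
cutoff = max_safe_cutoff([4.2, 4.5, 4.8])
```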
This is correctly handled by `unsampled_endstates` in our sampler (though omitted in the Folding@home calculations we use right now).
@dominicrufa @zhang-ivy : Do you know if the nonequilibrium switching also expands the cutoff at the equilibrium endpoints to include in estimating the reweighted free energies?
> Do you know if the nonequilibrium switching also expands the cutoff at the equilibrium endpoints to include in estimating the reweighted free energies?
It doesn't. I'll need @jchodera's and @dominicrufa's input on how to implement this.
Since we haven't been using the dask distributed version much, the best approach for us would be to:

- Update `openmmtools` to add a `NonequilibriumSwitchingSampler`, which just simulates equilibrium endstates and collects work values connecting them for use in MBAR. This will be very simple compared to the other samplers.
- Use `HybridCompatibilityMixin` to extend the `NonequilibriumSwitchingSampler` in a few lines, as with `ReplicaExchangeSampler` and `SAMSSampler`.
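A minimal sketch of what such a sampler's work-collection interface might look like. The class name, method names, and the one-sided EXP (Zwanzig) estimator are stand-ins for the real openmmtools implementation, which would use MBAR/BAR for the actual analysis:

```python
import math

class NonequilibriumSwitchingSamplerSketch:
    """Hypothetical sketch: simulate the two equilibrium endstates,
    collect forward/reverse nonequilibrium work values, and estimate
    the free-energy difference between them."""

    def __init__(self):
        self.forward_work = []  # work for 0 -> 1 switches (in kT)
        self.reverse_work = []  # work for 1 -> 0 switches (in kT)

    def collect(self, w_forward, w_reverse):
        """Store one pair of switching work values."""
        self.forward_work.append(w_forward)
        self.reverse_work.append(w_reverse)

    def delta_f_exp(self):
        """One-sided EXP estimate from forward work: dF = -ln <exp(-W)>.
        (Kept self-contained here; the real analysis would use MBAR.)"""
        n = len(self.forward_work)
        return -math.log(sum(math.exp(-w) for w in self.forward_work) / n)
```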
I just realized we could include the anisotropic long-range dispersion correction in a very simple and inexpensive way via HMC: when we run the dynamics part of MCMC for the `MCMCSampler`, we always evaluate the reduced potential we want to sample from using a very large cutoff (e.g. just slightly smaller than half the smallest box dimension) when using explicit solvent, but always use a much shorter cutoff (e.g. `9*angstroms`) when running dynamics. We can then accept/reject with HMC.