Closed meschw04 closed 2 years ago
Thanks for submitting this example.
This is something that is probably best handled by the MD engines themselves. Maybe we will integrate something at some point for these iterations, but PySAGES hands control of longer simulations to the MD engines, so long runs have to be terminated by the MD engine. HOOMD-blue offers the HOOMD_WALLTIME flag, which raises an exception if the runtime is exceeded. I am not sure if OpenMM offers something similar.
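In the meantime, a wall-time limit can be approximated from the outside by splitting the run into chunks and stopping before the budget is exhausted. This is just a sketch: `run_chunk` is a placeholder for whatever the engine's run call is (e.g. `hoomd.run(n)` or `simulation.step(n)` in OpenMM), not a real API.

```python
import time

def run_in_chunks(run_chunk, total_steps, chunk_steps, walltime_limit):
    """Advance a simulation in chunks, stopping cleanly before a wall-time limit.

    run_chunk(n)   -- placeholder for the engine's run call (hypothetical)
    walltime_limit -- budget in seconds
    Returns the number of steps actually completed.
    """
    start = time.monotonic()
    steps_done = 0
    while steps_done < total_steps:
        n = min(chunk_steps, total_steps - steps_done)
        run_chunk(n)
        steps_done += n
        elapsed = time.monotonic() - start
        # Extrapolate: stop if one more chunk would likely exceed the limit.
        per_step = elapsed / steps_done
        if elapsed + per_step * chunk_steps > walltime_limit:
            break
    return steps_done
```

The chunk size trades granularity against the overhead of re-entering the engine's run loop; a few thousand steps per chunk is usually negligible.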
The long-term plan is rather to have the iteration flag you mentioned parallelized over multiple GPUs.
Estimates of how long a given simulation will run are also handled by the engines. HOOMD-blue prints this out by default, and OpenMM has a state reporter that can report the estimated run time.
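The estimate these reporters print is just throughput extrapolation; a minimal engine-agnostic version (my own sketch, not any engine's API) looks like this:

```python
import time

class ETAReporter:
    """Minimal run-time estimate: extrapolate from steps completed so far."""

    def __init__(self, total_steps):
        self.total = total_steps
        self.start = time.monotonic()

    def report(self, steps_done):
        """Return (steps/second, estimated seconds remaining)."""
        elapsed = time.monotonic() - self.start
        if steps_done == 0 or elapsed <= 0:
            return 0.0, float("inf")
        rate = steps_done / elapsed
        remaining = (self.total - steps_done) / rate
        return rate, remaining
```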
At some point, we also want to implement functionality so that long runs can be checkpointed and restarted. I would say we postpone these great ideas until then.
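Whatever engine-specific state ends up being saved, the surrounding checkpoint plumbing tends to look the same. A generic sketch (the state dictionary contents are hypothetical; the atomic-rename trick is the important part):

```python
import os
import pickle

def save_checkpoint(path, state):
    """Write a checkpoint atomically, so a killed run never leaves a
    truncated file behind: write to a temp name, then rename over."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX and Windows

def load_checkpoint(path, default=None):
    """Return the last checkpoint, or `default` on a fresh start."""
    if not os.path.exists(path):
        return default
    with open(path, "rb") as f:
        return pickle.load(f)
```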
I just noticed that you explicitly request the CPU platform. Does that have an impact on your runtime experience?
I took this issue as inspiration for a Google Colab notebook that runs a harmonic bias simulation with OpenMM. It will soon be part of the PySAGES tutorials. Note that if you run it in CPU mode the actual simulation cell takes about 12 minutes, but using the GPU as an accelerator the same cell takes less than 30 seconds. OpenMM seems to be very sensitive to using the GPU. (Unlike HOOMD-blue, which is more tolerant of the CPU for small examples.)
Hello! Thanks so much for the answers in issue #99, gonna go ahead and close that issue for the time being (I ended up using `daiquiri` with the OpenMM example, happy to share if that'd be helpful to you all). I also took @InnocentBug's suggestion to try umbrella sampling with the ADP example in OpenMM. I wrote the code shown below. My understanding is that this should run five umbrellas in OpenMM over 1e5 time steps (after an initial burn-in), then use WHAM to stitch these together to provide the `A` matrix. Looking at some other examples, the constants I set below in terms of the torsional angles and the umbrella `k` constant all seem reasonable.

If I run just a single umbrella in OpenMM without using PySAGES (by adding, e.g., `bias_torsion_phi = CustomTorsionForce("0.5*k_phi*dtheta^2; dtheta = min(tmp, 2*pi-tmp); tmp = abs(theta - phi)")`), and all the exact same code as below but without PySAGES, then the simulation completes in about 4 seconds. I started the script below running on a single core yesterday morning, and it finished this morning. I'm confused about what is causing such a high computational overhead. I have tried changing the `k` values, the start/end locations of (psi, phi), num_umbrellas, etc., and then I run for an hour before killing it. It really shouldn't take an hour, right? It should take, what, ~30 seconds?

On the implementation side of things (and I'd be happy to help with this), I think it would be really nice to have one or some of the following: a progress indicator on the `for` loop over the set of umbrellas to give an estimate of how long it will run for (see https://github.com/SSAGESLabs/PySAGES/blob/192e7f9af6fdb50329d2e8fea095537b39a1fc12/pysages/methods/umbrella_integration.py#L97).

Thanks so much! :smile:
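For anyone reading along, the periodic restraint inside that `CustomTorsionForce` expression can be sanity-checked with a plain-Python version of the same energy (the `% 2*pi` guard is an extra safety step I added; OpenMM torsions already lie in (-pi, pi]):

```python
import math

def harmonic_torsion_bias(theta, phi0, k):
    """Periodic harmonic bias matching the CustomTorsionForce expression:
    0.5*k*dtheta^2, dtheta = min(tmp, 2*pi - tmp), tmp = |theta - phi0|.
    Angles in radians; the wrap picks the shorter way around the circle."""
    tmp = abs(theta - phi0) % (2 * math.pi)
    dtheta = min(tmp, 2 * math.pi - tmp)
    return 0.5 * k * dtheta**2
```

For example, a torsion at `pi - 0.1` restrained to `-pi + 0.1` is only 0.2 rad away once the periodicity is accounted for, so the bias stays small instead of jumping to the ~`0.5*k*(2*pi)^2` a naive difference would give.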