evanberkowitz / isle

Lattice Monte-Carlo for carbon nano systems
https://evanberkowitz.github.io/isle/
MIT License

Parallel Tempering #25

Open evanberkowitz opened 4 years ago

evanberkowitz commented 4 years ago

Talking with Alexei Bazavov, I learned about two possible techniques of interest. The first is parallel tempering (first paper).

The idea is that you start two (or more) separate ensembles with different actions (for example, with a staggered mass, a different temperature, or a different U...) that I'll call A and B.

You separately evolve ensemble A according to action A and ensemble B according to action B. Then, at some point, you propose exchanging the field configurations.

You imagine that you start with the combined action S_A(phi_A) + S_B(phi_B) and Metropolis accept/reject the exchange against the swapped action S_A(phi_B) + S_B(phi_A), i.e. you accept with probability min(1, exp(-dS)) where dS = [S_A(phi_B) + S_B(phi_A)] - [S_A(phi_A) + S_B(phi_B)]. You can do this with multiple ensembles, rather than just 2.
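A minimal sketch of that accept/reject step in plain numpy (generic code, not Isle's interface; `S_A`, `S_B`, and the `phi` arguments are placeholders):

```python
import numpy as np

def swap_accepted(S_A, S_B, phi_A, phi_B, rng=None):
    """Metropolis test for exchanging the configurations of two streams.

    S_A and S_B are callables returning the (real) action of a configuration;
    this is a generic sketch, not Isle code.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Change of the combined action under the proposed exchange.
    dS = (S_A(phi_B) + S_B(phi_A)) - (S_A(phi_A) + S_B(phi_B))
    # Accept with probability min(1, exp(-dS)).
    return rng.uniform() < np.exp(-dS)
```

If the test passes, the two streams simply continue from each other's configurations.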

This could be implemented via MPI: the two (or more) streams are trivially parallelizable except for a once-in-a-while swap step. (Alexei points out that it's easier to MPI communicate action parameters than field configurations).
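One way to read Alexei's remark, sketched with mpi4py (one rank per stream; `evaluate_action(params, phi)` and the `params` objects are hypothetical placeholders, not Isle functions): each rank keeps its own configuration, only the small parameter set and two scalars cross the network, and on acceptance the ranks trade actions rather than fields.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
partner = rank ^ 1  # pair up ranks as (0,1), (2,3), ...

def attempt_swap(params_mine, phi_mine, evaluate_action, rng):
    """Propose exchanging *actions* (parameters) instead of configurations."""
    # Exchange only the small parameter set, not the field phi.
    params_theirs = comm.sendrecv(params_mine, dest=partner, source=partner)

    # Each rank evaluates both actions on its own configuration only.
    dS_local = (evaluate_action(params_theirs, phi_mine)
                - evaluate_action(params_mine, phi_mine))
    # The full change of the combined action is the sum of the two pieces.
    dS = dS_local + comm.sendrecv(dS_local, dest=partner, source=partner)

    # Both ranks must reach the same decision; let the lower rank draw.
    if rank < partner:
        accept = rng.uniform() < np.exp(-dS)
        comm.send(accept, dest=partner)
    else:
        accept = comm.recv(source=partner)

    # On acceptance the streams trade actions and the fields stay put, which
    # is equivalent to swapping configurations up to which rank carries
    # which ensemble.
    return (params_theirs if accept else params_mine), accept
```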

Swapping with a staggered-mass ensemble (for example) would formally remove ergodicity problems. Changing T could help overcome tunneling difficulties for first-order phase transitions. However, there can be a bad overlap problem: if the two actions favor field configurations that are too different, you'll hardly ever accept a swap.

jl-wynen commented 4 years ago

What do we need from Isle for this? It should be possible to implement this technique with what Isle already has. Just start multiple streams, each with a separate HMC driver. After some steps, do the swap manually by simply swapping the evolution stages of the different streams and continue the evolution. Or do you have a different solution in mind?
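For concreteness, a toy sketch of that alternation in one process; `Stream` and `hmc_evolve` are stand-ins for whatever driver objects and evolution calls would actually be used, not Isle's API:

```python
import numpy as np

rng = np.random.default_rng(0)

class Stream:
    """Stand-in for one ensemble: an action and its current configuration."""
    def __init__(self, action, phi):
        self.action, self.phi = action, phi

def hmc_evolve(stream, n_traj):
    """Placeholder: run n_traj HMC trajectories with the stream's own action."""
    pass  # in practice, drive Isle's HMC here and update stream.phi

def try_swap(a, b, rng):
    dS = (a.action(b.phi) + b.action(a.phi)) - (a.action(a.phi) + b.action(b.phi))
    if rng.uniform() < np.exp(-dS):
        a.phi, b.phi = b.phi, a.phi  # exchange configurations, keep the actions

# Two ensembles whose actions differ, e.g. by a staggered mass or by U.
A = Stream(lambda phi: 0.5 * np.sum(phi**2),        rng.normal(size=16))
B = Stream(lambda phi: 0.5 * np.sum(phi**2) / 4.0,  rng.normal(size=16))

for _ in range(100):            # blocks of ordinary evolution...
    hmc_evolve(A, n_traj=10)
    hmc_evolve(B, n_traj=10)
    try_swap(A, B, rng)         # ...separated by occasional swap attempts
```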

evanberkowitz commented 4 years ago

Yes, I can imagine this being done by alternating streams. Any idea if it's practical to do it with python-level MPI parallelization?

jl-wynen commented 4 years ago

I have no real experience with mpi4py. But since it supports sending numpy arrays, it should be pretty easy to realise. If you make sure that the different HMC drivers write to different files, there should be no need to make any Isle internals aware of MPI. Every rank just has its own copy of everything. Since parallel tempering only needs infrequent communication, that sounds just about ideal.
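Something along these lines, maybe (an mpi4py sketch; `action` is a toy per-rank placeholder and the actual HMC evolution and I/O are elided): each rank evolves its own stream and writes its own file, and only the occasional swap step communicates, exchanging the configurations as numpy arrays with `Sendrecv`.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
partner = rank ^ 1                       # swap partner: (0,1), (2,3), ...
outfile = f"stream_{rank}.h5"            # each rank writes to its own file

rng = np.random.default_rng(rank)
phi = rng.normal(size=128)               # this rank's field configuration
recv = np.empty_like(phi)

def action(cfg):
    """Placeholder for this rank's action, e.g. a rank-dependent mass or U."""
    return 0.5 * (1.0 + rank) * np.sum(cfg**2)

for _ in range(1000):
    # ... run many HMC trajectories here with this rank's action, writing
    #     measurements and checkpoints to `outfile`; no communication at all ...

    # Infrequent swap step: exchange the configurations as numpy arrays.
    comm.Sendrecv(phi, dest=partner, recvbuf=recv, source=partner)

    # Exchange the single scalar needed to build the full Delta S.
    dS_local = action(recv) - action(phi)
    dS = dS_local + comm.sendrecv(dS_local, dest=partner, source=partner)

    # Same decision on both ranks: the lower rank of the pair draws.
    if rank < partner:
        accept = rng.uniform() < np.exp(-dS)
        comm.send(accept, dest=partner)
    else:
        accept = comm.recv(source=partner)

    if accept:
        phi[:] = recv                    # continue from the partner's configuration
```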

luutom commented 4 years ago

This might be a fun project for a student??

evanberkowitz commented 4 years ago

Agreed, but I don't think it's right for my summer student; it doesn't have a definitive "... and then we calculate $QUANTITY and get a final answer and I never have to talk to Evan again if I don't want to."