katherineding / dqmc-dev


[correctness] Implement global moves #16

Open katherineding opened 1 year ago

katherineding commented 1 year ago

See 1 and the email discussion with E. W. Huang: we need non-local updates at half filling, large U, and low temperature. Otherwise we are not ergodically exploring the phase space, and there can be "sticking" behavior of the occupancies. QUEST apparently implements this already -- see 2
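For reference, below is a minimal sketch (not the dqmc-dev implementation) of one common kind of global move: flip the Hubbard-Stratonovich field at a single site on every imaginary-time slice at once, and accept with the exact determinant-ratio weight. The names `hs` (±1 field of shape n_sites × n_slices), `expK` (exp(-Δτ·K)), `lam` (HS coupling), and `rng` are illustrative assumptions; a real implementation would also need numerical stabilization and a fresh recomputation of the Green's functions after an accepted move.

```python
import numpy as np

def fermion_det(hs, expK, lam, sigma):
    """det(I + B_L ... B_1) for spin sigma = +1 or -1 (naive, no stabilization)."""
    n_sites, n_slices = hs.shape
    A = np.eye(n_sites)
    for l in range(n_slices):
        B_l = expK @ np.diag(np.exp(sigma * lam * hs[:, l]))
        A = B_l @ A
    return np.linalg.det(np.eye(n_sites) + A)

def global_site_flip(hs, expK, lam, rng):
    """Flip the HS field at one random site on every time slice (non-local move)."""
    w_old = fermion_det(hs, expK, lam, +1) * fermion_det(hs, expK, lam, -1)
    i = rng.integers(hs.shape[0])
    proposal = hs.copy()
    proposal[i, :] *= -1  # flip all time slices at site i
    w_new = fermion_det(proposal, expK, lam, +1) * fermion_det(proposal, expK, lam, -1)
    # Metropolis: accept with probability min(1, |w_new / w_old|); at half
    # filling the up*down determinant product is non-negative, so abs() is harmless.
    if rng.random() < min(1.0, abs(w_new / w_old)):
        return proposal, True
    return hs, False
```

This is deliberately the brute-force version (full determinant recomputation, O(N^3·L) per proposal), since the point of a global move is ergodicity rather than speed; it would be mixed in at low frequency with the usual local sweeps.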

katherineding commented 1 year ago

Update from @wenowang96 on 3/26/2023:

Here I show a few plots of nup (red) and ndown (blue) as a function of sweeps (x-axis). Each upper panel shows the average over all measurements up to that sweep, and each lower panel shows the average over blocks of 1000 measurements (10*1000/L sweeps, i.e. 250 sweeps for beta=2 and 25 sweeps for beta=20). For lower temperatures I stopped at a smaller number of sweeps because the runs are very time consuming. (I could do more but I am very lazy.) Every simulation has 10 warm-up sweeps and at most 10^6 measurement sweeps, with period_eqlt=10. All of these are on an 8 by 8 lattice.
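For concreteness, a minimal sketch of the two diagnostics described above, assuming `nup` is a time series of per-measurement values (names are illustrative, not part of dqmc-dev):

```python
import numpy as np

def running_average(nup):
    """Average over all measurements up to and including each point (upper panels)."""
    nup = np.asarray(nup, dtype=float)
    return np.cumsum(nup) / np.arange(1, len(nup) + 1)

def block_averages(nup, block=1000):
    """Mean of each consecutive block of `block` measurements (lower panels)."""
    nup = np.asarray(nup, dtype=float)
    n_blocks = len(nup) // block
    return nup[:n_blocks * block].reshape(n_blocks, block).mean(axis=1)
```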

At high temperature (beta=2, U=8), nup and ndown converge to the same value quite quickly, and the lower panel shows their values oscillating around 0.5. No long "sticking" effect is observed. But for low temperature and large U (beta=20, U=10 and beta=12, U=12), there is clearly a "sticking" effect. In the lower panels, nup can stick at either a positive or a negative value for stretches of ~10^4 sweeps, so the averages over those stretches stick to one value as well.

I think this is consistent with Richard's observation, since they typically found the sticking problem for bins with 1000 measurement sweeps. We also see this problem on a scale of ~10^4 sweeps as we go to lower temperatures, around beta=20.

But since we typically do 10^6 measurement sweeps for each bin, and (I think more importantly) run multiple independent Markov chains, I don't think this ergodicity issue is a concern. The ergodicity problem should resolve as the number of measurement sweeps -> infinity, once the phase space is fully accessed. It is somewhat like needing a number of sweeps larger than the autocorrelation length of a specific measurement: here, nup+ndown needs only a small number, but nup-ndown needs a much larger one. As long as our number of measurement sweeps is much larger than 10^4 and we have independent Markov chains, we should also be fine for nup-ndown.
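A hedged sketch of how one might estimate that autocorrelation length for a series such as nup-ndown (purely illustrative; the truncation at the first zero crossing is a common but simplistic convention, and this is not part of dqmc-dev):

```python
import numpy as np

def integrated_autocorr_time(x):
    """Integrated autocorrelation time in units of the measurement spacing.

    Convention: tau = 0.5 for an uncorrelated series.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]                       # normalized autocorrelation, lag 0 .. N-1
    tau = 0.5
    for t in range(1, len(acf)):
        if acf[t] <= 0.0:               # truncate the sum at the first zero crossing
            break
        tau += acf[t]
    return tau
```

If tau for nup-ndown comes out at ~10^4 sweeps, that is the scale the bin length has to comfortably exceed.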

As he said in the paper, 'Naturally, being certain that ergodicity is satisfied in any simulation is a highly nontrivial issue, and it is possible that the simulations are still stuck in a limited region of phase space and that further moves are required. We can only assert that the obvious signatures of a lack of equilibration have been eliminated here.' I think implementing exactly what he proposed in the paper is not strictly necessary, since it does not guarantee improvement for all measurements. But a take-home message may be that we need to be careful about cases where we see a very small error bar simply because we do not have enough sweeps/measurements.
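One possible check for the deceptively-small-error-bar scenario is a blocking (binning) analysis: if the naive standard error keeps growing as the bin size increases, the bins are shorter than the correlation time and the quoted error bar is too small. A minimal sketch under those assumptions (not part of dqmc-dev):

```python
import numpy as np

def stderr_vs_binsize(x, bin_sizes=(1, 10, 100, 1000, 10000)):
    """Naive standard error of the mean as a function of bin size."""
    x = np.asarray(x, dtype=float)
    out = {}
    for b in bin_sizes:
        n = len(x) // b
        if n < 2:
            continue
        means = x[:n * b].reshape(n, b).mean(axis=1)
        out[b] = means.std(ddof=1) / np.sqrt(n)
    return out  # should plateau once b is much larger than the autocorrelation time
```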

[Three screenshots: nup (red) and ndown (blue) vs. sweeps for beta=2 U=8, beta=20 U=10, and beta=12 U=12]
katherineding commented 1 year ago

Related to #14

katherineding commented 1 year ago

Potentially relevant cluster update method: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.57.4976 (mentioned in https://www.nature.com/articles/s41467-022-34683-x)