It's pretty common practice to evaluate a Markov chain's autocorrelation time. See, for example, Monte Carlo errors with less errors. There the authors use "the Γ method" to compute the autocorrelation time of many different observables. Looking at an example Markov chain
we can see that the fermionic N (2nd row, orange) is the slowest of the examined primary observables, with an autocorrelation on the scale of 50-100 MC steps. Presumably if we handle the autocorrelation there, all autocorrelation will be handled.
A straightforward single-observable implementation of Γ and the autocorrelation time is quite slow.
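For reference, the quantities being estimated below are the autocovariance, the normalized autocorrelation function, and the window-dependent integrated autocorrelation time (standard conventions, with the window W cut at the first zero crossing of the normalized autocorrelation):

```latex
C(t) = \frac{1}{N-t}\sum_{i=1}^{N-t}(x_i - \mu)(x_{i+t} - \mu), \qquad
\rho(t) = \frac{C(t)}{C(0)}, \qquad
\tau_{\mathrm{int}}(W) = \frac{1}{2} + \sum_{t=1}^{W}\rho(t)
```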
import torch

def autocorrelation(series, tmax=None, w=None, mean=None):
    # series is the tensor that holds the time-ordered data
    # tmax is the maximum autocorrelation time to measure
    #
    # by Tom Luu
    dim = len(series)
    if tmax is None:
        tmax = dim
    if tmax > dim:
        raise ValueError("Requested tmax is larger than length of series")
    C = torch.zeros(tmax)
    if mean is None:
        mu = torch.mean(series)
    else:
        mu = mean
    for tt in range(tmax):
        # average over all pairs separated by tt (quadratic in tmax)
        C[tt] = torch.mean((series[:dim - tt] - mu) * (series[tt:] - mu))
    C = C / C[0]  # normalize
    tau = [.5 * C[0]]  # tau_int(W) = 1/2 + sum_{t=1}^{W} C(t)
    if w is None:
        w = range(1, tmax)
    for tt in w:  # this expression assumes that C[t] is symmetric about t=0
        if tt == 0:
            continue  # C[0] already contributes the 1/2 above
        if C[tt] <= 0:  # at first zero crossing, stop the calculation of tau
            break
        tau.append(tau[-1] + C[tt])
    tau = torch.stack(tau)
    # Returns two arrays: the first is the autocorrelation function,
    # the second is the evaluation of tau as a function of window size.
    # Use the last value of tau (tau[-1]) as the estimate for the autocorrelation time.
    # Compare exp(-t/tau[-1]) with the autocorrelation function.
    return C, tau
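As a sanity check, the routine can be exercised on a synthetic chain with a known correlation scale, e.g. an AR(1) process x_{i+1} = ρ·x_i + η_i, whose integrated autocorrelation time is approximately (1+ρ)/(2(1−ρ)). The snippet below is a sketch: it inlines a condensed copy of the routine above so it runs standalone, and the chain length and ρ are arbitrary illustrative choices.

```python
import torch

# Condensed copy of the Gamma-method routine above, so this example
# runs standalone.
def autocorrelation(series, tmax=None):
    dim = len(series)
    if tmax is None:
        tmax = dim
    mu = series.mean()
    # C(t) averaged over all pairs separated by t (quadratic in tmax)
    C = torch.stack([((series[:dim - t] - mu) * (series[t:] - mu)).mean()
                     for t in range(tmax)])
    C = C / C[0]
    tau = [0.5 * C[0]]          # tau_int(W) = 1/2 + sum_{t=1}^{W} C(t)
    for t in range(1, tmax):
        if C[t] <= 0:           # stop at the first zero crossing
            break
        tau.append(tau[-1] + C[t])
    return C, torch.stack(tau)

# Synthetic AR(1) chain with rho = 0.9, so tau_int should come out
# near (1 + 0.9) / (2 * (1 - 0.9)) = 9.5.
torch.manual_seed(0)
N, rho = 20000, 0.9
x = torch.empty(N)
x[0] = 0.0
eta = torch.randn(N)
for i in range(1, N):
    x[i] = rho * x[i - 1] + eta[i]

C, tau = autocorrelation(x, tmax=200)
print(float(tau[-1]))  # estimate of the integrated autocorrelation time
```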
If we impose (totally nonsensical) periodic boundary conditions then we can greatly accelerate the calculation by noting that it's a convolution, which can be evaluated with FFTs,
def autocorrelation(a, mean=None):
    if mean is None:
        mean = a.mean()
    b = a - mean
    plus = torch.fft.fft(b, norm='backward')
    minus = torch.fft.ifft(b, norm='forward')  # equals conj(fft(b)) for real b
    C = torch.fft.fft(plus * minus, norm='backward').real / len(b)**2
    C = C / C[0]  # normalize
    # Truncate at the first zero crossing of C.
    clamped = torch.clamp(C, 0, None)
    minIdx = torch.argmin(clamped, dim=0)
    # Note: this sum weights C[0] = 1 fully, unlike the 1/2 * C[0] in the
    # Gamma-method routine above.
    return C, C[:minIdx].sum()
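The same AR(1) sanity check can be applied to the FFT version. Again a standalone sketch: the function is a condensed copy of the one above, and the chain parameters are arbitrary illustrative choices.

```python
import torch

# Condensed copy of the FFT-based routine above, so this example runs
# standalone.
def autocorrelation_fft(a, mean=None):
    if mean is None:
        mean = a.mean()
    b = a - mean
    plus = torch.fft.fft(b, norm='backward')
    minus = torch.fft.ifft(b, norm='forward')  # conj(fft(b)) for real b
    C = torch.fft.fft(plus * minus, norm='backward').real / len(b)**2
    C = C / C[0]
    clamped = torch.clamp(C, 0, None)
    minIdx = torch.argmin(clamped, dim=0)      # first zero crossing
    return C, C[:minIdx].sum()

# AR(1) chain with rho = 0.9; the periodic wrap-around is negligible
# when the chain is much longer than the correlation time.
torch.manual_seed(0)
N, rho = 20000, 0.9
x = torch.empty(N)
x[0] = 0.0
eta = torch.randn(N)
for i in range(1, N):
    x[i] = rho * x[i - 1] + eta[i]

C, tau = autocorrelation_fft(x)
print(float(tau))  # estimate of the integrated autocorrelation time
```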
The autocorrelation functions track one another but are not in perfect agreement (because of the PBCs). In a few examples I've studied, the autocorrelation time from the Γ method has come out about 1.5 times as large as the one from the FFT method.
Questions
[ ] Can this be understood? Made precise?
[ ] Is there another choice of FFTs that reproduces the Γ method exactly?
Strided HMC
We can save on disk space and measurement time in the long run if we only save every nth configuration (where n is approximately the autocorrelation time). Then the autocorrelation time of the saved chain will be much closer to 0.
[ ] What is a good interface for keeping only every nth configuration? Should it be specified by GrandCanonical.generate? As part of the HMC construction itself?
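One lightweight option, independent of how GrandCanonical.generate is actually structured, is a generator wrapper that keeps every nth configuration. The name strided and the idea of wrapping an arbitrary iterable of configurations are assumptions for illustration, not the existing API.

```python
import itertools

def strided(configurations, stride):
    # Keep only every `stride`-th configuration from any iterable of
    # configurations (e.g. the output of an HMC driver).
    yield from itertools.islice(configurations, 0, None, stride)

# Toy usage: pretend integers are configurations and the autocorrelation
# time is about 3, so we keep every 3rd one.
kept = list(strided(range(10), 3))
print(kept)  # [0, 3, 6, 9]
```

A wrapper like this keeps the HMC implementation untouched; the alternative is a stride keyword on the generator itself. Either way the intermediate configurations must still be generated, since they are needed to advance the chain; striding only avoids storing and measuring them.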