carstenbauer / MonteCarlo.jl

Classical and quantum Monte Carlo simulations in Julia
https://carstenbauer.github.io/MonteCarlo.jl/dev/

DQMC Measurements underestimate errors #143

Closed ffreyer closed 2 years ago

ffreyer commented 2 years ago

Currently most measurements do an average over sites before pushing to the LogBinner. They should probably push once per source site instead...
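For reference, a minimal sketch of the two strategies using BinningAnalysis.jl's `LogBinner` (variable names are placeholders, not MonteCarlo.jl code):

```julia
using BinningAnalysis, Statistics

site_values = rand(16)          # hypothetical per-site measurement from one sweep

# current approach: average over sites, then push a single number per sweep
B_avg = LogBinner()
push!(B_avg, mean(site_values))

# alternative raised here: push one value per source site
B_per_site = LogBinner()
for v in site_values
    push!(B_per_site, v)
end

std_error(B_avg), std_error(B_per_site)
```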

ffreyer commented 2 years ago

I ran a bunch of tests with fake/generated correlated data, testing the following (sketched in code right after the list):

  1. binning of all values in one LogBinner
  2. binning values from different time series in individual LogBinners
  3. binning the mean of several values in one LogBinner
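A rough sketch of what these three variants look like with `LogBinner`, using placeholder correlated data rather than the actual generators from the tests below:

```julia
using BinningAnalysis, Statistics

# placeholder data: K correlated time series of equal length
series = [cumsum(0.01 .* randn(10_000)) .+ 1.0 for _ in 1:4]

# (1) all values in one LogBinner
B1 = LogBinner()
foreach(x -> push!(B1, x), Iterators.flatten(series))

# (2) one LogBinner per time series
B2 = [LogBinner() for _ in series]
for (b, s) in zip(B2, series)
    foreach(x -> push!(b, x), s)
end

# (3) at each time step, push the mean across all series into one LogBinner
B3 = LogBinner()
for t in eachindex(first(series))
    push!(B3, mean(s[t] for s in series))
end

std_error(B1), map(std_error, B2), std_error(B3)
```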

Test 1

I created a generator that spits out the same value N times, then generates a new value. This essentially gives us blocks of N perfectly correlated measurements. Using one of those with a block size of 32, I binned all values (1) and the average of 100 values (3). The standard errors are statistically the same.

I repeated this with additive and multiplicative rand() noise; there were no significant changes.
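One plausible reconstruction of this test; the generator implementation, series length, and chunking are my guesses at the setup, not the original script:

```julia
using BinningAnalysis, Statistics

# blocks of `blocksize` identical values, i.e. perfectly correlated measurements
blocked_series(nblocks, blocksize) = reduce(vcat, [fill(randn(), blocksize) for _ in 1:nblocks])

data = blocked_series(10_000, 32)

# (1) bin every value
B_all = LogBinner()
foreach(x -> push!(B_all, x), data)

# (3) bin the average of every 100 consecutive values
B_avg = LogBinner()
foreach(chunk -> push!(B_avg, mean(chunk)), Iterators.partition(data, 100))

std_error(B_all), std_error(B_avg)
```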

Test 2

Using two generators with different block sizes, I binned their values individually (2), their averages (3), and both their values together (1). The result is an exact match between averaging values from both generators (3) and pushing every value (1). The independent binners had larger errors, and the result from Gaussian error propagation was larger still.

I ran this test with block sizes (32, 16) and (32, 23) without noise, and with additive, multiplicative, and mixed rand() noise at block sizes (32, 23). There were no significant differences; (3) and (1) remain exactly the same.
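Again only a sketch of how I read the setup; the common series length and the normalization of the Gaussian propagation are assumptions:

```julia
using BinningAnalysis, Statistics

blocked_series(nblocks, blocksize) = reduce(vcat, [fill(randn(), blocksize) for _ in 1:nblocks])

N = 32 * 23 * 500                    # common length, divisible by both block sizes
a = blocked_series(N ÷ 32, 32)
b = blocked_series(N ÷ 23, 23)

# (2) individual binners per series
Ba, Bb = LogBinner(), LogBinner()
foreach(x -> push!(Ba, x), a)
foreach(x -> push!(Bb, x), b)

# (3) one binner with the average of both series at each step
Bavg = LogBinner()
foreach(i -> push!(Bavg, (a[i] + b[i]) / 2), 1:N)

# (1) one binner with every value from both series
Ball = LogBinner()
foreach(x -> push!(Ball, x), vcat(a, b))

# Gaussian error propagation for the mean of the two series
err_gauss = sqrt(std_error(Ba)^2 + std_error(Bb)^2) / 2

std_error(Ball), std_error(Bavg), (std_error(Ba), std_error(Bb)), err_gauss
```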

Test 3

To get a closer model of DQMC measurements, I used 16 generators with random block sizes between 20 and 40, each with random additive noise. The measurement in this case is G_i * G_{i+1} + G_{i-1} * G_i, where i runs from 1 to 16. I recorded values for every i (1) and their mean (3). The errors remain equal up to float precision.

I repeated this test with values generated as 0.5 .+ [sin(0.1i + j) for j in 1:16] .+ rand(16), getting the same result.
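A sketch of how such a test could look; the periodic indexing, noise amplitude, and sweep count are my assumptions rather than the original parameters:

```julia
using BinningAnalysis, Statistics

nsweeps    = 100_000
nsites     = 16
blocksizes = rand(20:40, nsites)

# one "generator" per site: repeat a value blocksizes[i] times, plus additive noise
state   = randn(nsites)
counter = zeros(Int, nsites)
function next_G!(i)
    counter[i] += 1
    if counter[i] > blocksizes[i]
        state[i] = randn()
        counter[i] = 1
    end
    return state[i] + 0.1 * rand()
end

B_each = LogBinner()    # (1) push the measurement at every i
B_mean = LogBinner()    # (3) push the mean over i

for _ in 1:nsweeps
    G = [next_G!(i) for i in 1:nsites]
    vals = [G[i] * G[mod1(i + 1, nsites)] + G[mod1(i - 1, nsites)] * G[i] for i in 1:nsites]
    foreach(v -> push!(B_each, v), vals)
    push!(B_mean, mean(vals))
end

std_error(B_each), std_error(B_mean)
```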

My conclusion is that no, we are not underestimating errors.