Open evanberkowitz opened 1 year ago
Before deciding, I should investigate what guarantees exist about hardware dependence etc. of the random seed. Is it always the same everywhere?
OF COURSE: the RNG state is different on the CPU and the GPU. So that's a headache! It makes chip independence quite annoying...
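As a quick sanity check of what the seed does guarantee: reseeding reproduces the same stream on a given device (a minimal sketch; note `torch.manual_seed` also seeds the CUDA generators, but the CUDA stream is a separate generator from the CPU one, which is exactly the headache):

```python
import torch

# Same seed, same device => same stream of draws.
torch.manual_seed(1234)
a = torch.rand(4)

torch.manual_seed(1234)
b = torch.rand(4)

assert torch.equal(a, b)
# The CUDA generator is distinct: the same seed gives a *different*
# stream on the GPU, so results are not chip-independent.
```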
On the GPU

```python
import torch

device = torch.device('cuda')
state = torch.cuda.get_rng_state(device)
print(state.shape)  # torch.Size([816])
print(state.dtype)  # torch.uint8
```
while on the CPU

```python
cpu_state = torch.get_rng_state()
print(cpu_state.shape)  # torch.Size([5056])
print(cpu_state.dtype)  # torch.uint8
```
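A minimal check that the state round-trips on the CPU (the CUDA calls are analogous, via `torch.cuda.{get,set}_rng_state(device)`):

```python
import torch

# Capture the CPU generator state, draw, restore the state, draw again:
# the restored generator reproduces exactly the same numbers.
state = torch.get_rng_state()
first = torch.randn(8)

torch.set_rng_state(state)
second = torch.randn(8)

assert torch.equal(first, second)
```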
In principle the state should be saved, so that when we continue from an ensemble we get one meaningful, continuous stream of random numbers.
We can use `torch.{get,set}_rng_state` to get/set it. Probably this should be done somehow inside the MCMC. Should I save the state every trajectory / MCMC step? Is that overkill? The CPU state is 5056 uint8s ≈ 5 kB, roughly 12x smaller than an 11^2*32 complex-double configuration (121*32*16 bytes ≈ 62 kB). So maybe storing it in an array and writing it out like an observable (one per configuration) is sensible?
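A sketch of that idea, with a stand-in `mcmc_step` (hypothetical; the real trajectory code would go there): record the state once per step, stack the states into one `(n_configurations, state_size)` uint8 array, and write it out like any other observable. Restoring the state at index k then replays configuration k exactly:

```python
import torch

def mcmc_step():
    # Stand-in for a real trajectory; draws from the global CPU generator.
    return torch.randn(())

states = []
observables = []
for _ in range(10):
    states.append(torch.get_rng_state())  # 5056 uint8s on the CPU
    observables.append(mcmc_step())

# One row per configuration; save this alongside the observables.
states = torch.stack(states)
print(states.shape, states.dtype)  # torch.Size([10, 5056]) torch.uint8

# Restoring the state recorded before step 3 replays that step exactly.
torch.set_rng_state(states[3])
assert torch.equal(mcmc_step(), observables[3])
```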