The code as it is on Eddie. Plenty of Easter eggs.

To do:

[x] Optimizer and scheduler initialisation in checkpoints.py (see the checkpoint sketch after this list)
[x] Tidy up observables
[x] Remove fields.py
[x] Make sure multiprocessing still works
[x] Tidy up the neural network class, removing the final activation function altogether (see the MLP sketch below)
[x] Update sampling as done in #60
[x] Acceptance and integrated autocorrelation time should be accessible to plot/table actions (a sketch of the tau_int estimator follows the list)
[ ] Ideally, replace the collect over the sampling action (see #46 for why this would be nice), but only if the replacement is actually better (to be discussed in future)
[x] Update layers as done in #60, but double-check that training still works
[x] Use more recent PyTorch version
[x] Update to use PyTorch searchsorted (see the torch.searchsorted sketch below)
[x] Update the free theory test to use torch.fft.fft2 (sketch below)
[ ] Option to specify the scale in GlobalRescaling instead of letting it be a learnable parameter, or otherwise stop it from varying after some number of iterations. More generally, look at rescaling in more detail (separate PR; a sketch of the fixed-scale option follows the list)
[ ] Implement layer-by-layer plotting (separate PR)
[ ] Bootstrap the cosh fit (separate PR; see the bootstrap sketch below)
[ ] Add docstrings
[x] Add to set of example runcards
[ ] Add functionality for aggregating over many models and plotting results (separate PR)
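
A minimal sketch of saving and restoring the optimizer and scheduler alongside the model, in the spirit of the checkpoints.py item above. The function names and dictionary keys are assumptions for illustration, not the project's actual schema:

```python
import torch

def save_checkpoint(path, model, optimizer, scheduler, epoch):
    # Hypothetical layout: bundle everything needed to resume training
    torch.save(
        {
            "epoch": epoch,
            "model_state_dict": model.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
            "scheduler_state_dict": scheduler.state_dict(),
        },
        path,
    )

def load_checkpoint(path, model, optimizer, scheduler):
    checkpoint = torch.load(path)
    model.load_state_dict(checkpoint["model_state_dict"])
    # The optimizer and scheduler must already be constructed against the
    # model's parameters before their state dicts are loaded
    optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
    scheduler.load_state_dict(checkpoint["scheduler_state_dict"])
    return checkpoint["epoch"]
```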
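For the final-activation tidy-up, a sketch of building the network with a bare linear output layer; the builder function and layer sizes are illustrative, not the project's class:

```python
import torch.nn as nn

def make_mlp(sizes):
    # Hidden layers get an activation; the last Linear is left bare,
    # i.e. no final activation function. sizes is e.g. [4, 16, 16, 2].
    layers = []
    for n_in, n_out in zip(sizes[:-2], sizes[1:-1]):
        layers += [nn.Linear(n_in, n_out), nn.Tanh()]
    layers.append(nn.Linear(sizes[-2], sizes[-1]))  # no final activation
    return nn.Sequential(*layers)
```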
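For exposing the integrated autocorrelation time to plot/table actions, a sketch of the underlying estimator, tau_int = 1/2 + sum_t rho(t). The fixed summation window is an assumption; a production version would use an automatic windowing procedure (e.g. Madras-Sokal):

```python
import numpy as np

def integrated_autocorrelation(series, window=None):
    # series: 1D Monte Carlo history of an observable
    series = np.asarray(series, dtype=float)
    n = len(series)
    centred = series - series.mean()
    # Autocovariance at each lag (O(n^2), fine for a sketch)
    acf = np.array(
        [np.dot(centred[: n - t], centred[t:]) / (n - t) for t in range(n // 2)]
    )
    rho = acf / acf[0]  # normalised autocorrelation function
    if window is None:
        window = n // 20  # crude fixed window, an assumption
    return 0.5 + rho[1:window].sum()
```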
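The searchsorted update amounts to replacing any hand-rolled bin search with torch.searchsorted (available since PyTorch 1.6). A sketch of the idiom for locating spline segments, with knot values chosen purely for illustration:

```python
import torch

knots = torch.linspace(-1.0, 1.0, steps=9)  # sorted bin edges of a spline
x = torch.rand(4) * 2 - 1                   # inputs in [-1, 1)
# Index of the segment containing each x; the clamp keeps values that sit
# exactly on the lower boundary inside the valid range
bin_index = torch.searchsorted(knots, x).clamp(1, len(knots) - 1) - 1
```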
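For the free theory test, a sketch of estimating the momentum-space two-point function with torch.fft.fft2 (the torch.fft module that replaced the old rfft-style API in PyTorch 1.8). The function name and normalisation here are assumptions:

```python
import torch

def two_point_momentum(phi):
    # phi: batch of field configurations, shape (batch, L, L)
    phi_p = torch.fft.fft2(phi)
    # <|phi(p)|^2> averaged over the batch, divided by the lattice volume
    return (phi_p * phi_p.conj()).real.mean(dim=0) / phi.shape[-1] ** 2
```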
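A sketch of what an optional fixed scale in GlobalRescaling could look like; the constructor signature and the log-density convention are assumptions, not the project's actual layer:

```python
import torch
import torch.nn as nn

class GlobalRescaling(nn.Module):
    def __init__(self, scale=1.0, learnable=True):
        super().__init__()
        scale = torch.tensor(float(scale))
        if learnable:
            self.scale = nn.Parameter(scale)
        else:
            # A buffer is saved in the state dict but never handed to the
            # optimizer, so the scale stops varying during training
            self.register_buffer("scale", scale)

    def forward(self, phi, log_density):
        # phi: (batch, N); the Jacobian of phi -> scale * phi contributes
        # -N * log|scale| to the log density of the transformed sample
        log_density = log_density - phi.shape[-1] * torch.log(self.scale.abs())
        return self.scale * phi, log_density
```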
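A sketch of bootstrapping the cosh fit: resample the correlator measurements with replacement, refit, and take the spread of the fitted mass as the error. The input shape, fit range, and model parametrisation are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def bootstrap_cosh_fit(correlator, n_boot=100, seed=0):
    # correlator: (n_measurements, T) array of two-point function samples
    rng = np.random.default_rng(seed)
    n, T = correlator.shape
    t = np.arange(T)
    cosh_model = lambda t, amp, mass: amp * np.cosh(mass * (t - T / 2))
    masses = []
    for _ in range(n_boot):
        # Resample measurements with replacement, then fit the resampled mean
        resampled = correlator[rng.integers(0, n, size=n)].mean(axis=0)
        (_, mass), _ = curve_fit(cosh_model, t, resampled, p0=(resampled[0], 0.5))
        masses.append(mass)
    masses = np.array(masses)
    return masses.mean(), masses.std()  # central value and bootstrap error
```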