This PR does three things:

- Saves and loads the RNG state of Python/NumPy/torch as binary files, which saves time as noted in #64.
  - `.npy` is used for the Python and NumPy seed keys, which works well across versions and avoids pickle injection.
  - `.pt` is used for the torch/CUDA RNG state, with arbitrary pickle injection disabled as well.
- Removes the need for optimizers to know about random state; `neps.runtime` manages it for them.
- Drops the shared-state lock polling interval from 1 second to 0.1 seconds. I imagine this was previously high due to issues like the `16 == 15` issue in #42, but I don't know of a concrete reason to keep it at 1 second.
## Impact
With the `time.sleep(2)` in `neps_examples/basic_usage/hyperparameters.py` removed, this change brought the runtime from 9.3 seconds down to 3.9 seconds on my machine: half of the program's duration was spent just serializing and deserializing random state.
I'm hoping this also halves the time taken to run the tests, meaning we could run them all locally instead of having to deal with marked tests.
This PR also adds a test file; previously, there was no test verifying that serialization actually worked as intended.