Changes
Stacked on the smashpy PR; that one will be merged first.
Add a save_path param that, if specified, saves the results as each benchmark_range finishes, so that if the script breaks part way through, some of the results are preserved. This will also be useful when running on the MARCC cluster (see the sketch below).
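A minimal sketch of the intended saving behavior; the function name, loop structure, and results layout here are illustrative assumptions, not the actual implementation (only save_path and benchmark_range come from this PR):

```python
import numpy as np

def run_benchmarks(benchmark_ranges, models, save_path=None):
    """Hypothetical benchmark driver illustrating incremental saving."""
    results = {}
    for benchmark_range in benchmark_ranges:
        # Placeholder for the real per-range benchmark; assumed layout is
        # {benchmark_range: {model_name: np.array of per-run scores}}.
        results[benchmark_range] = {m: np.random.rand(5) for m in models}
        if save_path is not None:
            # Overwrite the file after each range so partial results survive
            # if the script dies part way through (e.g. on the MARCC cluster).
            np.save(save_path, results)
    return results

results = run_benchmarks([10, 25, 50], ["model_a", "model_b"], save_path="results.npy")
```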
Tests
unit tests
loaded results from the .npy file and displayed them; you have to use np.load(file, allow_pickle=True).item() since it saves a dict of np.arrays (see the snippet after this list)
loaded results that had an uneven number of runs for different models; this still worked fine
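For reference, a hedged example of loading the saved results back; the file name and dict layout are assumptions matching the sketch above, but the allow_pickle=True and .item() calls are what the tests exercised (np.save wraps a pickled dict in a 0-d array, so .item() recovers the dict):

```python
import numpy as np

# Load the pickled dict back out of the 0-d object array np.save produced.
results = np.load("results.npy", allow_pickle=True).item()

for benchmark_range, per_model in results.items():
    for model, runs in per_model.items():
        # Different models may have an uneven number of runs;
        # nothing here assumes equal lengths.
        print(benchmark_range, model, len(runs), runs.mean())
```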
Doc Changes