Hello,

I believe that when creating large datasets (splits), the user will always hit this error in the "savez_compressed" function. It could be rewritten as:
import pickle

# Pickle protocol 4 (available since Python 3.4) supports serializing
# objects larger than 4 GiB, which avoids the savez_compressed failure.
for idx, train_dict in enumerate(train_data):
    with open(train_path + str(idx) + '.npz', 'wb') as f:
        pickle.dump(train_dict, f, protocol=4)
However, this would also require rewriting the other functions that open these .npz files during training, since the files would now contain pickle data rather than real NumPy archives (see the sketch below).
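For reference, the loading side could then look something like this. This is only a minimal sketch: train_path and the per-split file naming follow the save loop above, and the function name load_split is just illustrative, not an existing function in the codebase.

import pickle

def load_split(train_path, idx):
    # Counterpart to the save loop above: read one pickled split back.
    # Note: pickle.load is used instead of np.load, because the files
    # are pickle data despite their .npz extension.
    with open(train_path + str(idx) + '.npz', 'rb') as f:
        return pickle.load(f)

# Example: recover the first training split
# train_dict = load_split(train_path, 0)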
So, my suggestion is to adapt the code to deal with large datasets.
Thank you!