aliciatang07 opened this issue 4 years ago
I stored all the embeddings of the dataset in memory for semi-supervised learning. If they don't fit in your memory, you can append the embeddings to a file instead of keeping them in a NumPy array, and then read from that file iteratively.
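A minimal sketch of that file-based approach using a NumPy memmap (the `embedding_batches` generator is a placeholder, and the sizes are only illustrative, taken from the traceback below):

```python
import numpy as np

# Illustrative sizes: 50000 embeddings of dimension 5632, float32.
n_samples, dim = 50000, 5632

# Preallocate the array on disk; only the current batch lives in RAM.
emb = np.lib.format.open_memmap(
    "embeddings.npy", mode="w+", dtype="float32", shape=(n_samples, dim)
)

offset = 0
for batch in embedding_batches():  # hypothetical generator yielding (B, dim) arrays
    emb[offset:offset + len(batch)] = batch
    offset += len(batch)
emb.flush()

# Later, read it back lazily instead of loading ~1 GiB at once.
emb = np.load("embeddings.npy", mmap_mode="r")
```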
Thanks! I used HDF5 to store them and it works. But when I run semi-supervised evaluation on CIFAR-10 with the trained model, I get an error rate of 0.5303, which is very high. What error rate should I expect, and do you have any idea what the problem might be?
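In case it helps others, incremental HDF5 storage with h5py looks roughly like this (the dataset name, chunk size, and batch generator are placeholders, not the repo's actual code):

```python
import h5py

dim = 5632  # embedding dimension from the traceback below

with h5py.File("embeddings.h5", "w") as f:
    # Resizable dataset: grows along axis 0 as batches come in.
    dset = f.create_dataset(
        "embeddings", shape=(0, dim), maxshape=(None, dim),
        dtype="float32", chunks=(256, dim),
    )
    for batch in embedding_batches():  # hypothetical generator of (B, dim) arrays
        n = len(batch)
        dset.resize(dset.shape[0] + n, axis=0)
        dset[-n:] = batch

# Read back in slices so the full array is never materialized in memory.
with h5py.File("embeddings.h5", "r") as f:
    dset = f["embeddings"]
    for i in range(0, dset.shape[0], 1024):
        chunk = dset[i:i + 1024]  # only this slice is loaded into RAM
```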
I got this error when running the test file:

```
Traceback (most recent call last):
  File "test_semisup.py", line 102, in <module>
    all_embeddings = np.concatenate(all_embeddings, axis=0)
  File "<__array_function__ internals>", line 6, in concatenate
MemoryError: Unable to allocate 1.05 GiB for an array with shape (50000, 5632) and data type float32
```
Any idea how to solve it?