You may first try a trick:
change lines 72-73 in reader.py to:
f = h5py.File(self.feature_file, 'r')  # keep the file handle open
self.features = f[data_split]          # h5py Dataset: nothing is copied into RAM yet
The HDF5 file is then read in a lazy mode: only the data you index is loaded into memory, while the rest stays on disk. This mode should use much less memory, since the image features remain on disk, but training will be slower, depending on your machine's disk performance.
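For illustration, here is a minimal sketch contrasting eager and lazy HDF5 access with h5py; the file name 'features.hdf5' and the dataset key 'train' are hypothetical placeholders, not names from this repository.

import h5py

# Eager: [:] copies the entire dataset into RAM at once,
# which is what exhausts memory for large feature files.
with h5py.File('features.hdf5', 'r') as f:  # hypothetical file name
    features_in_ram = f['train'][:]         # full copy held in memory

# Lazy: keep the h5py Dataset handle instead of copying it.
# Each index expression reads only the requested rows from disk.
f = h5py.File('features.hdf5', 'r')   # keep the file open for the run
features_on_disk = f['train']         # Dataset object, no copy made yet
batch = features_on_disk[0:32]        # reads only these 32 rows from disk

The trade-off is that every batch now incurs a disk read, so a fast disk (ideally an SSD) helps considerably.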
Dear author, I am very interested in your paper and your code, but I ran into a problem when running it. As you said, I use more than 30 GB of memory (40 GB, to be exact), yet memory runs out shortly after the 40,000th iteration (about half an hour in), which is far from the end of training. Could you tell me how much memory you used, why this happens, and what I should do? I do not want to add memory, because it might take at least 300 GB to run, which is impossible for me. Thank you very much!