Open zzohouri opened 2 years ago
Another question: why isn't any batching done over the data? How is this supposed to work when the extracted features from all subset classes at each increment are appended to a single list? This obviously causes a huge RAM overflow; even when shifting the data to the GPU, the code tries to allocate 85.5 GB of GPU memory. Can you please specify what hardware you used for your experiments? There is no information or indication in the paper about the hardware, and no batch size is specified either.
So the question again comes down to this: how is this code ever going to work if all the data is appended to one list? It causes a RAM overflow, and even on the GPU it requires 85.5 GB of memory.
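For context, what we expected was something along these lines: process the data in batches and preallocate the feature buffer instead of growing a Python list. This is only a minimal sketch with a placeholder extractor (feature_fn, feat_dim, and the toy data are our own stand-ins, not your actual API):

```python
import numpy as np

def extract_features_batched(data, feature_fn, batch_size=256, feat_dim=512):
    """Hypothetical batched replacement for appending every feature to a list.

    `feature_fn` stands in for the model's feature extractor; it maps a
    batch of inputs to an array of shape (batch, feat_dim).
    """
    n = data.shape[0]
    # Preallocate the output once instead of accumulating arrays in a list,
    # so peak memory stays at (n, feat_dim) plus one batch.
    features = np.empty((n, feat_dim), dtype=np.float32)
    for start in range(0, n, batch_size):
        end = min(start + batch_size, n)
        features[start:end] = feature_fn(data[start:end])
    return features

# Toy usage with a fake "extractor" that flattens and truncates each image.
fake_extract = lambda batch: batch.reshape(batch.shape[0], -1)[:, :512]
imgs = np.random.rand(1000, 32, 32).astype(np.float32)
feats = extract_features_batched(imgs, fake_extract, batch_size=128, feat_dim=512)
print(feats.shape)  # (1000, 512)
```

With a pattern like this, memory use is bounded by the feature matrix itself rather than by thousands of intermediate list entries.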
Thanks for sharing your code again.
Dear Ali, thanks for sharing your code. Could you please share a GPU version of it? Throughout the code you use Python lists and NumPy arrays, which leads to heavy RAM use, and so far we have not been able to run any of the experiments from your paper.
Even when we tried to run the code on a GPU, the program attempted to allocate 85.5 GB of GPU memory.
Is there anything we are missing? Is there anything wrong with the np.where lines, or with the "for j in indices[0]" loop, in the getInceremntalData class?
Please help: how can we avoid this heavy memory use? We are seeing this with the Caltech101 dataset. Thanks.