Closed Sreeni1204 closed 1 year ago
Hi @Sreeni1204
Please try "random" first and check whether the data path is correct. It seems the uncertainty sampler cannot select any candidates.
Hello @swagshaw
If you are referring to the memory management, then it works fine with "random".
The following commands run successfully on the same dataset, but "uncertainty" fails as described in my previous comment.
# Finetune
python main.py --dataset TAU-ASC --mode finetune
# Random
python main.py --dataset TAU-ASC --mode replay --mem_manage random
# Reservoir
python main.py --dataset TAU-ASC --mode replay --mem_manage reservoir
# Prototype
python main.py --dataset TAU-ASC --mode replay --mem_manage prototype
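For reference, the failing mode would presumably be invoked with the same flag pattern as the commands above (the exact spelling of the `--mem_manage` value is an assumption on my side):

```shell
# Uncertainty (the mode that fails; flag value spelling assumed to match the others)
python main.py --dataset TAU-ASC --mode replay --mem_manage uncertainty
```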
I am using the ESC-50 dataset
Hello @swagshaw
I read the paper completely again and found this information
To simulate the condition of edge devices, we set the max amount of examples as 500, 100 samples in DCASE 19 Task 1 and ESC-50 due to the memory limitation.
So, I updated the --memory_size to 100.
Let me know if this is correct.
Yes, it is correct. ESC-50 has more classes, so each class gets a smaller memory quota. I may have forgotten to point this out in my README.
Thank you for the information.
Also, if I am training on my custom dataset, how do I calculate the memory size? Is there any algorithm you have used to calculate the memory size for ESC-50 dataset or DCASE dataset?
Basically, when you are training on your custom dataset, you need to consider the size of the dataset. Briefly, the memory size should ensure that each class has enough samples in the memory buffer, while not exceeding the number of samples available for any single class. Say we have a small dataset with 100 samples for 'first', 200 samples for 'second', and the remaining 300 samples for 'third'; then the memory size could be 10% of the whole dataset, e.g. 60 (each class keeps 20 samples). But note that you'd better not allocate more than 100 samples per class here, because when the sampler selects candidates for 'first', the request would exceed that class's number of samples and may raise an error.
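The rule of thumb above can be sketched as a quick feasibility check: split the memory budget evenly across classes and make sure no class is asked for more samples than it has. The function name and the class counts are illustrative, not from the actual codebase:

```python
def per_class_memory(memory_size, class_counts):
    """Samples kept per class for a given total memory budget.

    Raises if the per-class quota exceeds the smallest class, since
    the sampler could then request more candidates than exist.
    """
    per_class = memory_size // len(class_counts)
    smallest = min(class_counts.values())
    if per_class > smallest:
        raise ValueError(
            f"per-class quota {per_class} exceeds the smallest class "
            f"({smallest} samples); lower the memory size"
        )
    return per_class

# The toy dataset from the comment above: 100 / 200 / 300 samples.
counts = {"first": 100, "second": 200, "third": 300}
print(per_class_memory(60, counts))  # memory size 60 -> 20 samples per class
```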
Hope this could help you. :)
Thanks a lot for the clarification, this helps a lot. I will close this issue for now and open a new one in case I run into further trouble.
Please let me know what needs to be done here:
I am using the ESC-50 dataset and the train.sh file to reproduce the results you have published. I have not changed any configuration except the batch size.