vicmax opened this issue 1 year ago (Open)
Please follow this script to train WSN on the TinyImageNet dataset: `./scripts/wsn/wsn_tiny_image.sh`. I will clean up the source code further in the near future.
Best,
@ihaeyong
Hi Haeyong,
I carefully read the code of FS-DGPM, which also ran experiments on tiny-ImageNet. In their code, I found that they use just one classifier with `output=200`.
However, for task-aware evaluation, they slice the predictions belonging to a given task (between `offset1` and `offset2`, e.g., `offset1=5` and `offset2=10` for the second task) and fill the outputs of the other tasks with zero (see the implementation of `model(x, t)` in the code here).
By doing this, FS-DGPM is effectively equivalent to having 40 five-way classifiers.
Thus, I suspect there is a small mistake in your code: you set up 40 200-way classifiers and do not apply any slicing. I also wonder whether the prediction performance of WSN would improve further if the prediction were constrained to those 5 logits.
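To illustrate the slicing I mean, here is a minimal sketch (the function name and shapes are hypothetical, assuming 40 tasks of 5 classes each; this is not the actual FS-DGPM or WSN code):

```python
import torch

N_TASKS, CLASSES_PER_TASK = 40, 5  # hypothetical TinyImageNet split: 40 tasks x 5 classes

def task_aware_predict(logits: torch.Tensor, t: int) -> torch.Tensor:
    """Restrict a 200-way output to the 5 logits of task t before taking the argmax."""
    offset1 = t * CLASSES_PER_TASK        # e.g. offset1 = 5 for the second task (t = 1)
    offset2 = (t + 1) * CLASSES_PER_TASK  # e.g. offset2 = 10 for the second task
    # argmax over the task's own slice, shifted back to global class indices
    return logits[:, offset1:offset2].argmax(dim=1) + offset1

logits = torch.randn(8, N_TASKS * CLASSES_PER_TASK)  # batch of 8 through a single 200-way head
pred = task_aware_predict(logits, t=1)               # predictions fall in [5, 10)
```

Either restricting the argmax to the slice like this, or masking out the other tasks' logits before the argmax, keeps the prediction within the 5 classes of the current task.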
Hi,
Thanks for continuing to update the released code!
While reading the paper and the relevant code in this repo, I had several questions about the setup of the TinyImageNet experiments:

1. Under which setting were the experiments on tiny-ImageNet conducted? I went through your paper and did not find any description of this point. I printed the model architecture of the tiny-ImageNet experiments and found that each classification head has an output of 200. Based on my understanding, shouldn't there be 40 classifiers with `output=5` each?
2. Class-incremental loader for tiny-ImageNet: I saw that the data loader for tiny-ImageNet is built with `loader_type='class_incremental_loader'`. Even under the class-incremental setting, shouldn't the 40 classifiers have outputs like this: (0) output=5; (1) output=10; (2) output=15; ...; (39) output=200? (See the sketch below.)
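To make question 2 concrete, here is a minimal sketch of the two head layouts I have in mind (the sizes, 40 tasks with 5 classes per task and 512-dimensional features, are hypothetical and not taken from the repo):

```python
import torch.nn as nn

# (a) Task-incremental layout: one separate 5-way head per task.
multi_head = nn.ModuleList([nn.Linear(512, 5) for _ in range(40)])

# (b) Class-incremental layout: a single head whose usable outputs grow with the
#     task index, i.e. task t uses the first 5 * (t + 1) logits of a 200-way layer.
single_head = nn.Linear(512, 200)

def visible_classes(t: int) -> int:
    """Number of classes seen up to and including task t: 5, 10, 15, ..., 200."""
    return 5 * (t + 1)
```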
Sorry, I am a newcomer to the field of Continual Learning. Looking forward to any replies.
Best,