Closed libo-huang closed 1 year ago
My concrete question is: is it an appropriate setting to use all training data from previous tasks when calculating the accuracy of the current incremental model?

As shown in lines 497 and 502 below, the class_means are obtained from the matrices D and D2, and DDE_CIL uses class_means to calculate the final reported accuracy. However, the matrices D and D2 are obtained from the prototypes (line 474, https://github.com/JoyHuYY1412/DDE_CIL/blob/c1bb7f2794dadb02bf195318395d39b87cf50508/cifar100-class-incremental/class_incremental_cosine_cifar100.py#L474-L503), which contain the full training sets of both the learned tasks and the current task. We can track this in two steps.

1. First, the prototypes in line 474 are constructed from X_train_total, as shown in line 182: https://github.com/JoyHuYY1412/DDE_CIL/blob/c1bb7f2794dadb02bf195318395d39b87cf50508/cifar100-class-incremental/class_incremental_cosine_cifar100.py#L180-L182
2. Further, X_train_total (line 138) is obtained from the full CIFAR-100 trainset (line 128): https://github.com/JoyHuYY1412/DDE_CIL/blob/c1bb7f2794dadb02bf195318395d39b87cf50508/cifar100-class-incremental/class_incremental_cosine_cifar100.py#L128-L139

I hold that using all training data from previous tasks to calculate the accuracy amounts to information leakage.
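The evaluation step in question can be sketched in a few lines. The following toy illustration of nearest-class-mean classification is my own, with invented names and shapes rather than code from the repository; it only shows how class means built from training features drive the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the quantities discussed above (names are illustrative,
# not the repository's actual variables): per-class training features and
# held-out test features for 3 classes in a 16-dim feature space.
n_classes, dim = 3, 16
train_feats = {c: rng.normal(loc=c, size=(50, dim)) for c in range(n_classes)}
test_feats = rng.normal(loc=1, size=(20, dim))

# class_means computed from *training* features of all seen classes.
# This is the step the question is about: the means summarize the full
# training sets of previous tasks, not only a small exemplar memory.
class_means = np.stack(
    [train_feats[c].mean(axis=0) for c in range(n_classes)]
)  # shape (n_classes, dim)

# Nearest-class-mean (NCM) prediction: assign each test sample to the
# class whose mean is closest in Euclidean distance.
dists = np.linalg.norm(test_feats[:, None, :] - class_means[None, :, :], axis=2)
pred = dists.argmin(axis=1)
print(pred.shape)  # one predicted label per test sample
```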
Thank you for reading our code in detail; you must have found our paper novel and interesting.

From the code segments you showed, I think that code is copied from LUCIR (https://github.com/hshustc/CVPR19_Incremental_Learning), and the matrices D and D2 are used to calculate the NCM accuracy. The NCM accuracy is usually calculated as an ideal-case reference and is not reported as the final result.

I hold that I did not make unfair comparisons :)
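The distinction drawn above, between the accuracy reported from the trained classifier's outputs and the NCM accuracy computed from class means, can be sketched as follows. This is a toy illustration with invented variables (W, class_means), not the repository's code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, dim, n_test = 3, 16, 30

# Hypothetical stand-ins: a linear classifier head (as a weight matrix)
# and class means computed from training features.
W = rng.normal(size=(dim, n_classes))            # trained classifier weights
class_means = rng.normal(size=(n_classes, dim))  # means of training features
test_feats = rng.normal(size=(n_test, dim))
test_labels = rng.integers(0, n_classes, size=n_test)

# "CNN accuracy": argmax over the classifier's logits. This is the number
# typically reported as the final result.
cnn_pred = (test_feats @ W).argmax(axis=1)
cnn_acc = (cnn_pred == test_labels).mean()

# "NCM accuracy": nearest class mean. Because the means are built from
# training data, this serves as an ideal-case reference rather than the
# headline metric.
dists = np.linalg.norm(test_feats[:, None, :] - class_means[None, :, :], axis=2)
ncm_pred = dists.argmin(axis=1)
ncm_acc = (ncm_pred == test_labels).mean()

print(f"CNN acc: {cnn_acc:.2f}, NCM acc: {ncm_acc:.2f}")
```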
Thanks for your rapid reply and your interesting causal perspective on CIL.

Another question: are the proposed method's no-replay experiments achieved by setting the number of prototypes per class at the end to zero? For example, by setting the parameter --nb_protoss in cifar100-class-incremental/class_incremental_cosine_cifar100.py to 0 for the experiments on CIFAR-100.
Yes, you are right.
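For context, LUCIR-style code typically fills an exemplar memory with a fixed number of samples per class selected by herding; setting that number to zero leaves the memory empty, which yields the no-replay setting asked about. A minimal sketch under that assumption (the function and variable names here are illustrative, and the selection rule is a simplified stand-in for herding):

```python
import numpy as np

rng = np.random.default_rng(2)

def build_exemplar_memory(features_per_class, nb_protos):
    """Keep the nb_protos samples closest to each class mean (a simplified
    stand-in for herding selection)."""
    memory = {}
    for cls, feats in features_per_class.items():
        mean = feats.mean(axis=0)
        order = np.argsort(np.linalg.norm(feats - mean, axis=1))
        memory[cls] = feats[order[:nb_protos]]
    return memory

features_per_class = {c: rng.normal(size=(50, 8)) for c in range(5)}

# With nb_protos = 0 every class contributes an empty exemplar set,
# i.e. no old-task data is replayed.
memory = build_exemplar_memory(features_per_class, nb_protos=0)
replay_count = sum(len(v) for v in memory.values())
print(replay_count)  # 0
```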