naver-ai / dap-cl

Official code of "Generating Instance-level Prompts for Rehearsal-free Continual Learning (ICCV 2023)"

Problem with the evaluation metrics #4

Open myscius opened 9 months ago

myscius commented 9 months ago

Hello! I'd like to ask how you calculate the average accuracy in your paper. The referenced paper, "Discovering causal signals in images," does not seem to describe this. Also, when I run your code, I get a result higher than the one reported in your paper, and the upper bound is not what the paper reports either.

whitesnowdrop commented 9 months ago

Hi, I've summarized the evaluation metrics in the appendix; please check it out. I used the average accuracy commonly employed in continual learning. Also, the results I reported are the mean over several runs, so they may be lower than what you obtained. Additionally, the upper-bound performance is obtained through a hyperparameter search tailored to each respective benchmark.
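
For reference, the average accuracy (ACC) commonly used in continual learning is the mean test accuracy over all tasks, measured after training on the final task. Here is a minimal NumPy sketch of that definition; the function name and the accuracy values are illustrative, not taken from this repo:

```python
import numpy as np

def average_accuracy(acc_matrix: np.ndarray) -> float:
    """Average accuracy (ACC) as commonly defined in continual learning.

    acc_matrix[i, j] holds the test accuracy on task j measured after
    training on task i, so the matrix is T x T and lower-triangular in
    practice. ACC is the mean over all tasks after the final task,
    i.e. the mean of the last row.
    """
    T = acc_matrix.shape[0]
    return float(acc_matrix[T - 1, :].mean())

# Illustrative 3-task run: each row is evaluated after one more task.
accs = np.array([
    [0.90, 0.00, 0.00],
    [0.85, 0.88, 0.00],
    [0.80, 0.84, 0.91],
])
print(f"ACC = {average_accuracy(accs):.4f}")  # mean of the last row
```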

myscius commented 9 months ago

Thanks, I've found the appendix!