google-research / l2p

Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ ECCV22
https://arxiv.org/pdf/2112.08654.pdf
Apache License 2.0

DualPrompt: The Results without Prompts #35

Closed JHang2020 closed 1 year ago

JHang2020 commented 1 year ago

Hi~ Thanks for your excellent work! I tried to reproduce the results on Split-CIFAR100 (DualPrompt). I am curious about how the prompts work, so I disabled the prompt mechanism by setting the use-g-prompt, use-e-prompt, and prompt-pool args to False and ran the code. I expected accuracy to drop sharply, as in the paper's ablation (Table 4 reports almost a 40% degradation on ImageNet-R). However, I got surprisingly high results:

Acc@1: 70.2400 Acc@5: 91.1500 Loss: 1.1950 Forgetting: 9.8889 Backward: -9.8889

These results are competitive with some recent works that do not use prompts, so I wonder whether I missed something important or there is a problem in my experiment.

Appreciate it if you can solve my puzzles!
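To make the ablation concrete, here is a minimal sketch (not the repo's actual code; the function, shapes, and flag names mirror the issue text and DualPrompt's description, but are illustrative) of what turning off use-g-prompt, use-e-prompt, and prompt-pool does in a DualPrompt-style forward pass: no prompt tokens are prepended, so the model reduces to the plain pretrained ViT backbone plus a trainable classifier head, which is itself a nontrivial baseline.

```python
# Illustrative sketch only -- not the l2p repository's implementation.
import numpy as np

rng = np.random.default_rng(0)

def forward(patch_tokens, g_prompt=None, e_prompt=None,
            use_g_prompt=False, use_e_prompt=False):
    """Prepend whichever prompt tokens are enabled, then 'encode'."""
    tokens = [patch_tokens]
    if use_g_prompt and g_prompt is not None:
        tokens.insert(0, g_prompt)   # G-Prompt: task-shared tokens
    if use_e_prompt and e_prompt is not None:
        tokens.insert(0, e_prompt)   # E-Prompt: task-specific tokens
    x = np.concatenate(tokens, axis=0)
    return x.mean(axis=0)            # stand-in for the transformer encoder

patches = rng.normal(size=(196, 768))  # e.g. ViT-B/16 patch embeddings
g = rng.normal(size=(5, 768))          # hypothetical G-Prompt tokens
e = rng.normal(size=(5, 768))          # hypothetical E-Prompt tokens

with_prompts = forward(patches, g, e, use_g_prompt=True, use_e_prompt=True)
without = forward(patches, g, e)       # the ablation described in this issue

# With all prompt flags False, the feature is just the backbone output
# over the patch tokens -- i.e. a frozen-pretrained-ViT baseline.
assert np.allclose(without, patches.mean(axis=0))
print(with_prompts.shape, without.shape)
```

Under this reading, the 70% Acc@1 would be the strength of the pretrained backbone itself rather than a sign the prompts are doing nothing; whether that matches the repo's actual no-prompt code path is exactly the question the issue raises.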

hailuu684 commented 7 months ago

Hello, did you get your answer about this?