google-research / l2p

Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ ECCV22
https://arxiv.org/pdf/2112.08654.pdf
Apache License 2.0

Comparison with few/zero-shot performance and Task Specific prompts. #17

prateeky2806 closed this issue 1 year ago

prateeky2806 commented 2 years ago

Hi, thank you for your great work. I was wondering if you have done any of the following experiments.

  1. Have you evaluated the few-shot and zero-shot performance of the base ViT model on the CIFAR100 dataset?
  2. Have you tuned task-specific prompts for the base ViT model used for classification? I feel this is a crucial number to compare against L2P.

Thanks!

KingSpencer commented 2 years ago

Hi Prateek,

Thanks for your interest in our work and your great questions!

  1. I did not evaluate the standard few-shot and zero-shot settings for the base ViT model, since they are not directly related to continual learning. However, one can treat one of the baselines -- GDumb -- as a few-shot learning method: to my understanding, GDumb trains only on the buffered data, which is a subsample of the full dataset (see the first sketch after this list).

  2. I believe I conducted such experiments, but did not include them in the L2P paper. However, I have to say task-specific prompts are not directly applicable to class-incremental learning, since there is no way to choose the right task-specific prompt at inference when the task ID is unknown (see the second sketch after this list). If I remember correctly, task-specific prompts did a bit worse than L2P on CIFAR100, but were comparable to or better than L2P on 5-datasets. Intuitively, task-specific prompts cannot share knowledge between tasks, so that might be the reason. Nevertheless, feel free to add your own experiments if you are interested, and correct me if I am wrong.
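
For concreteness, here is a minimal sketch of the GDumb-style buffer mentioned in point 1 (the class name, buffer size, and eviction rule are illustrative, not taken from this repo): the buffer greedily keeps a roughly class-balanced subsample of the stream, and the model is later trained from scratch on it.

```python
import random
from collections import defaultdict

class GreedyBalancedBuffer:
    """Sketch of a GDumb-style buffer: keep a roughly class-balanced
    subsample of the data stream under a fixed memory budget."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.per_class = defaultdict(list)

    def _size(self):
        return sum(len(v) for v in self.per_class.values())

    def add(self, x, y):
        if self._size() < self.capacity:
            self.per_class[y].append(x)
            return
        # Buffer full: only accept the sample if its class is under-represented,
        # evicting a random sample from the currently largest class.
        largest = max(self.per_class, key=lambda c: len(self.per_class[c]))
        if len(self.per_class[y]) < len(self.per_class[largest]):
            self.per_class[largest].pop(random.randrange(len(self.per_class[largest])))
            self.per_class[y].append(x)

    def dataset(self):
        # Training a model from scratch on this subsample is effectively
        # few-shot learning over the full label set.
        return [(x, y) for y, xs in self.per_class.items() for x in xs]
```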
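
And to make the task-ID issue in point 2 concrete, here is a minimal numpy sketch (dimensions and names are illustrative): a task-specific prompt must be indexed by the ground-truth task ID, whereas L2P-style selection picks prompts from a shared pool by query-key similarity, which works even when the task ID is unknown at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
d, prompt_len, pool_size, top_n = 768, 5, 10, 5

# Task-specific prompts: one prompt per task, indexed by task ID.
task_prompts = {t: rng.normal(size=(prompt_len, d)) for t in range(10)}

def prompt_task_specific(x_embed, task_id):
    # Requires the ground-truth task ID -- unavailable at inference
    # in class-incremental learning.
    return np.concatenate([task_prompts[task_id], x_embed], axis=0)

# L2P-style: a shared pool of (key, prompt) pairs; prompts are chosen
# by cosine similarity between a query feature and the keys.
keys = rng.normal(size=(pool_size, d))
pool = rng.normal(size=(pool_size, prompt_len, d))

def prompt_l2p(x_embed, query):
    sim = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
    idx = np.argsort(-sim)[:top_n]          # top-N closest prompts in the pool
    selected = pool[idx].reshape(-1, d)     # (top_n * prompt_len, d)
    return np.concatenate([selected, x_embed], axis=0)

x_embed = rng.normal(size=(196, d))         # e.g. frozen ViT patch embeddings
query = rng.normal(size=(d,))               # e.g. [CLS] feature of the frozen ViT
print(prompt_l2p(x_embed, query).shape)     # (221, 768) with these settings
```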

Best, Zifeng

prateeky2806 commented 2 years ago

The reason I asked for zero/few-shot numbers is that I suspect the model might perform well even when we prepend random or only slightly trained vectors to the input image, because the ViT model is pre-trained on ImageNet-21k and CIFAR100 is very similar to it but easier. If the model has good zero/few-shot performance, that would invalidate some of the claims made in the paper regarding continual learning and preventing forgetting. Furthermore, if the performance is not better than task-specific prompts, then the claim about sharing knowledge would not be well supported either. This comparison is skipped entirely in the paper, and I think it is the most important method to compare against.
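
Concretely, the kind of control I have in mind looks roughly like this (a sketch only; it assumes a timm ViT pre-trained on ImageNet-21k, and the model name and manual forward pass are my own assumptions following older timm conventions, not anything from the paper or this repo): freeze the backbone, prepend random prompt tokens that are never updated, and train only a linear classification head on CIFAR100.

```python
import torch
import torch.nn as nn
import timm  # assumes timm provides the ImageNet-21k pre-trained ViT

vit = timm.create_model('vit_base_patch16_224_in21k', pretrained=True)
for p in vit.parameters():
    p.requires_grad = False  # frozen backbone, as in L2P

prompt_len, num_classes = 5, 100
random_prompt = torch.randn(1, prompt_len, vit.embed_dim)  # never trained
head = nn.Linear(vit.embed_dim, num_classes)               # the only trainable part

def forward_with_random_prompt(images):
    x = vit.patch_embed(images)                                   # (B, 196, d)
    cls = vit.cls_token.expand(x.shape[0], -1, -1)                # (B, 1, d)
    x = vit.pos_drop(torch.cat([cls, x], dim=1) + vit.pos_embed)  # (B, 197, d)
    x = torch.cat([random_prompt.expand(x.shape[0], -1, -1), x], dim=1)
    for blk in vit.blocks:
        x = blk(x)
    x = vit.norm(x)
    return head(x[:, prompt_len])  # [CLS] token sits right after the prompts

# Train only `head` on CIFAR100 (full data or a few shots per class) and
# compare the accuracy against the reported L2P numbers. If it is already
# high, the frozen pre-trained backbone rather than the prompting mechanism
# may explain much of the result.
```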

Thanks, Prateek

zhangyuanscall commented 1 year ago

Doesn't Section 5.4 of the DualPrompt paper validate that shared prefix-tuning is better?