google-research / l2p

Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ ECCV22
https://arxiv.org/pdf/2112.08654.pdf
Apache License 2.0

About optionally diversifying prompt-selection #9

Closed Dicer-Zz closed 2 years ago

Dicer-Zz commented 2 years ago

Thanks for the great idea and the results!

As the title says, I'd like to know how to use the optionally diversifying prompt-selection described in the paper. I don't see any arguments that control this method, nor an implementation of it in ./models/prompt.py.

I would also like to ask how the frequency of each prompt is normalized into a penalty factor; I don't see a specific description of this in the paper.
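
My own guess, based on the paper's idea of scaling the query-key matching score by how often each prompt has been selected, is something like the sketch below. Every name here is mine, not from this repo:

```python
import jax.numpy as jnp

def select_prompts(query, keys, counts, top_k=5):
    """Frequency-penalized prompt selection (my reading of the paper, not repo code).

    query:  (d,)   query feature q(x) from the frozen backbone
    keys:   (M, d) learnable prompt keys
    counts: (M,)   how often each prompt has been selected so far
    """
    # Cosine distance between the query and every key.
    q = query / jnp.linalg.norm(query)
    k = keys / jnp.linalg.norm(keys, axis=1, keepdims=True)
    dist = 1.0 - k @ q                         # (M,)

    # Normalize selection counts into a penalty factor h.
    h = counts / jnp.maximum(counts.sum(), 1.0)

    # Scale distances by h: frequently-picked prompts look "farther away".
    # (At step 0 all counts are zero, so everything ties; a real
    # implementation would need to handle that cold start.)
    penalized = dist * h

    # Keep the top_k smallest penalized distances.
    return jnp.argsort(penalized)[:top_k]
```

Is the intended normalization something like this?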

KingSpencer commented 2 years ago

Hi, thanks for your question. In this repo we actually use a simpler yet equally effective version. The optional argument that controls diversified prompt selection is config.use_prompt_mask. If it is set to True, at training time each task trains only a disjoint set of prompts, which effectively forces the model to use different prompts across tasks.
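
Roughly, the masking amounts to something like this (a simplified sketch, not the exact code in this repo; top_k here stands for the number of prompts selected per forward pass):

```python
import jax.numpy as jnp

def masked_prompt_indices(task_id, top_k=5):
    """With the mask on, task t trains only prompts
    [t * top_k, (t + 1) * top_k), overriding query-key matching."""
    start = task_id * top_k
    return jnp.arange(start, start + top_k)

# e.g. task 0 trains prompts [0..4], task 1 trains prompts [5..9], ...
```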

GengDavid commented 2 years ago

@KingSpencer I have the same question as @Dicer-Zz. You mentioned the "use_prompt_mask" option, which trains "disjoint sets of prompts" for each task. However, since there are only 10 prompts in the pool, top_k * num_tasks is larger than the pool size, so the per-task sets cannot actually stay disjoint. I think this option cannot achieve the same effect (i.e., diversifying the selection) as described in the paper.
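
Concretely, assuming the Split CIFAR-100 setting of top_k = 5 over 10 tasks (my numbers, just a back-of-the-envelope check):

```python
pool_size = 10                     # prompts in the pool, as in the paper
top_k, num_tasks = 5, 10           # assumed Split CIFAR-100 configuration
slots_needed = top_k * num_tasks   # 50
print(slots_needed > pool_size)    # True: disjoint per-task sets are impossible
```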