google-research / l2p

Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ ECCV22
https://arxiv.org/pdf/2112.08654.pdf
Apache License 2.0

Method issue #18

Closed JH-LEE-KR closed 1 year ago

JH-LEE-KR commented 2 years ago

Hi,

I am trying to run some experiments with L2P, and I have noticed something a little strange.

According to the paper and my understanding, the prompts in the prompt pool should be selected roughly evenly across tasks. However, when I look at the prompt indices and the TensorBoard histogram, it seems that only a few prompts are ever selected and trained: when I reproduce the results with the official code, only the 3rd, 6th, 7th, and 9th prompts are used (tallied as in the sketch below).
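For reference, this is how I count prompt usage. It is a hypothetical helper, not code from the repo; `logged` stands in for the top-k index arrays I collect during evaluation:

```python
import numpy as np

def prompt_usage(all_idx, pool_size):
    """Count how often each prompt index appears in the logged top-k picks."""
    return np.bincount(np.asarray(all_idx).ravel(), minlength=pool_size)

# Toy logged selections: every batch picks prompts {3, 6, 7, 9}.
logged = [[3, 6, 7, 9], [3, 6, 7, 9], [6, 7, 9, 3]]
print(prompt_usage(logged, pool_size=10))
# -> [0 0 0 3 0 0 3 3 0 3]
```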

Did I misunderstand something, or is the code implemented incorrectly?

I look forward to hearing from you.

Best. Jaeho Lee.

GengDavid commented 2 years ago

Maybe this issue is related to #9.

Dicer-Zz commented 2 years ago

My reimplementation for text classification (NLP) has the same problem. I think the reason is the following:

First, we select the top-k prompts whose initialized keys are closest to the queries of the first task, and then we optimize those prompts and their corresponding keys. When training the second task, we select the same prompts as for the first task, because optimization has pulled the selected keys toward the centre of the queries of all tasks.
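Here is a minimal NumPy sketch of that dynamic. It is not the official code; the pool size, top-k, and the `select` / `train_task` helpers are all made up for illustration. Once the keys of the winning prompts are pulled toward the query centre of task 1, the same prompts win the top-k match for task 2 as well, so the remaining prompts are never selected or trained:

```python
import numpy as np

rng = np.random.default_rng(0)
POOL_SIZE, TOP_K, DIM = 10, 5, 8

# Learnable prompt keys, randomly initialized on the unit sphere.
keys = rng.normal(size=(POOL_SIZE, DIM))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)

def select(query, keys, k=TOP_K):
    """Return the indices of the k keys most similar to the query."""
    q = query / np.linalg.norm(query)
    return np.argsort(-(keys @ q))[:k]

def train_task(queries, keys, lr=0.5, steps=200):
    """Pull the keys of the selected prompts toward the task's queries,
    mimicking the effect of the query-key matching loss."""
    for _ in range(steps):
        q = queries[rng.integers(len(queries))]
        q = q / np.linalg.norm(q)
        idx = select(q, keys)
        keys[idx] += lr * (q - keys[idx])
        keys[idx] /= np.linalg.norm(keys[idx], axis=1, keepdims=True)
    return keys

# Queries from a frozen pretrained encoder tend to cluster together,
# so give both tasks queries near a shared centre.
centre = rng.normal(size=DIM)
task1 = centre + 0.1 * rng.normal(size=(32, DIM))
task2 = centre + 0.1 * rng.normal(size=(32, DIM))

keys = train_task(task1, keys)
print("task 1 picks:", sorted(select(task1.mean(0), keys)))
print("task 2 picks:", sorted(select(task2.mean(0), keys)))  # typically the same set
```

In this toy model, changing the key initialization only changes which prompts win at the start; the winners still get locked in afterwards.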

I think the initialization method is crucial for this question, but I get the same result even after trying every initialization method mentioned in the code.

Am I missing some constraint?