THUDM / P-tuning-v2

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Apache License 2.0

Question about the implementation details of prompt depth experiment #48

Open bj1103 opened 1 year ago

bj1103 commented 1 year ago

Hello, I'd like to know how you implement prompts whose depth is less than the model's number of layers. Hugging Face requires the length of past_key_values to match the model's config.n_layer, so we can't just pass a prompt that doesn't cover all layers as past_key_values. Besides, it seems the layers can't share the same attention_mask if some of them have a prompt and some don't.
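For illustration, here is a minimal sketch (not the repo's actual code) of one workaround for the length check: build a past_key_values tuple of length config.n_layer where the layers beyond the chosen prompt depth get zero-length key/value tensors. All names here (n_layer, prompt_depth, and the shape constants) are assumptions for the example; this only addresses the tuple-length constraint, not the shared attention_mask problem described above.

```python
import torch

# Assumed configuration for the sketch (not taken from the repo).
n_layer = 12          # model's config.n_layer
prompt_depth = 6      # only the first 6 layers receive a prompt
batch, n_head, prompt_len, head_dim = 2, 12, 8, 64

# One (key, value) pair per transformer layer, so len(past_key_values)
# matches n_layer. Layers deeper than prompt_depth get an empty past
# (sequence length 0) instead of a prompt.
past_key_values = []
for layer in range(n_layer):
    seq = prompt_len if layer < prompt_depth else 0
    k = torch.zeros(batch, n_head, seq, head_dim)
    v = torch.zeros(batch, n_head, seq, head_dim)
    past_key_values.append((k, v))
past_key_values = tuple(past_key_values)
```

Even with the length check satisfied, a single attention_mask of length prompt_len + input_len would still be wrong for the layers whose past is empty, which is the second problem raised here.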

Thanks!