Coobiw opened this issue 1 year ago
Hi, I believe you have an older version of the model files in which I had commented out some lines for testing. Please re-download the package from the link and let me know if you have any issues. The visual prompts are added to the input of every transformer encoder of the model, i.e. VisionTransformerPromptDeep.
Thanks! I will update it. Do you mean that the visual prompts are added to every transformer block?
Yes
Thanks for your answer! I've understood the visual prompt ops now. Additionally, if convenient, I would like to ask about the performance difference between VisionTransformerPromptDeep and VisionTransformerPrompt, i.e. whether adding the prompts only at layer 0 versus at every layer leads to a performance gap?
Hello, thanks for your great work. But I have some questions about the visual prompts, especially the modifications to timm. Firstly, I find that you have commented out the code below:
So, does this code work now? What is the current implementation of the visual prompt?
And then, I want to know where the visual prompt is added to the ViT. The code below shows that you concatenate the [cls] token, visual prompt, and image patch tokens along the sequence-length dimension, is that right?
So is self-attention applied to the learnable visual prompts and image patch tokens only at the input layer? Or at every layer except the input layer, which is what the commented-out code does?
The former:
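(A minimal sketch of what I mean by the former, using hypothetical names and plain PyTorch layers rather than the actual timm modification: the learnable prompts are concatenated with the [cls] token and patch tokens once, at the input, and then simply flow through all blocks.)

```python
import torch
import torch.nn as nn

class PromptedViTShallow(nn.Module):
    """Hypothetical sketch of shallow prompting: prompt tokens are
    concatenated once, before the first transformer block."""

    def __init__(self, embed_dim=768, num_prompts=8, depth=12):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # One set of learnable visual prompts, used only at the input.
        self.visual_prompt = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(embed_dim, nhead=12, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, patch_tokens):  # patch_tokens: (B, N, embed_dim)
        B = patch_tokens.shape[0]
        cls = self.cls_token.expand(B, -1, -1)
        prompts = self.visual_prompt.expand(B, -1, -1)
        # Concatenate [cls] + prompts + patches along the sequence dim.
        x = torch.cat((cls, prompts, patch_tokens), dim=1)
        for blk in self.blocks:  # self-attn sees the prompt tokens at every
            x = blk(x)           # depth, but they are only injected at the input
        return x
```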
The latter:
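(And a hypothetical sketch of the latter, again not the repo's code: each block gets its own prompt tokens, and the previous block's prompt outputs are discarded and replaced before each block, which would match "added to the input of every transformer encoder".)

```python
class PromptedViTDeep(nn.Module):
    """Hypothetical sketch of deep prompting: a separate set of prompt
    tokens is inserted at the input of every transformer block."""

    def __init__(self, embed_dim=768, num_prompts=8, depth=12):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # One independent set of learnable prompts per block.
        self.visual_prompts = nn.Parameter(torch.zeros(depth, num_prompts, embed_dim))
        self.num_prompts = num_prompts
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(embed_dim, nhead=12, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, patch_tokens):  # patch_tokens: (B, N, embed_dim)
        B = patch_tokens.shape[0]
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat((cls, patch_tokens), dim=1)
        for i, blk in enumerate(self.blocks):
            prompts = self.visual_prompts[i].unsqueeze(0).expand(B, -1, -1)
            if i == 0:
                # First block: insert prompts between [cls] and patches.
                x = torch.cat((x[:, :1], prompts, x[:, 1:]), dim=1)
            else:
                # Later blocks: drop the previous prompt outputs and
                # insert this block's fresh prompt tokens instead.
                x = torch.cat((x[:, :1], prompts, x[:, 1 + self.num_prompts:]), dim=1)
            x = blk(x)
        return x
```

If I understand correctly, the only parameter difference between the two variants is how many prompt sets are learned (one versus one per block).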
Thanks for answering!