Open liangliangdeveloper opened 11 hours ago
Also, I am wondering whether the text features you provide are extracted by the original LLaMA model or by a LLaMA model fine-tuned on your data.
Hi, can you provide a more detailed example for the first question? If you are referring to the Temporal Grounding task, the features we extract are not all shorter than 5 tokens; the token count is positively correlated with the number of words in the sentence. In addition, for the grounding task we did not fine-tune the LLaMA model.
Thank you for your reply!
For the first question: I see that you add the prefix "summarize:" to all prompts, which is why the token counts differ. Why did you choose this design?
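To make the token-count mismatch concrete, here is a minimal sketch of how a fixed prompt prefix shifts every sentence's token count by a constant. This is an illustration, not the repository's code: a toy whitespace tokenizer stands in for the real LLaMA BPE tokenizer (with which the "summarize:" prefix evidently happens to occupy 5 tokens; with the toy split it occupies 1), but the constant-offset mechanism is the same.

```python
# Assumption: feature extraction prepends a fixed prefix before tokenizing,
# so the extracted feature has a constant number of extra tokens compared
# with tokenizing the raw sentence.

PREFIX = "summarize:"

def toy_tokenize(text: str) -> list:
    """Stand-in tokenizer: splits on whitespace (real LLaMA uses BPE)."""
    return text.split()

def feature_token_count(sentence: str) -> int:
    """Token count after prepending the prefix, as at feature extraction."""
    return len(toy_tokenize(PREFIX + " " + sentence))

sentence = "a person opens the door and walks in"
plain = len(toy_tokenize(sentence))          # tokens in the raw sentence
with_prefix = feature_token_count(sentence)  # tokens seen by the model
print(with_prefix - plain)                   # constant offset from the prefix
```

The offset does not depend on the sentence, which matches the observation that the mismatch is "a consistent number".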
For the other question: I am wondering whether LLaMA is fine-tuned during CLIP training. Are all LLaMA parameters frozen, or are the LoRA layers trainable in the CLIP training stage?
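For context, the usual setup being asked about looks like the following sketch: the backbone weights stay frozen and only parameters belonging to LoRA adapters receive gradients. The parameter names below are illustrative placeholders, not the repository's actual module names.

```python
# Assumption: a common LoRA fine-tuning policy, shown framework-agnostically.
# In PyTorch one would set p.requires_grad accordingly; here we just select
# which parameter names would be trainable.

def select_trainable(param_names: list) -> set:
    """Return the parameter names that should receive gradients:
    only LoRA adapter weights; everything else stays frozen."""
    return {name for name in param_names if "lora" in name}

params = [
    "model.layers.0.self_attn.q_proj.weight",          # frozen backbone
    "model.layers.0.self_attn.q_proj.lora_A.weight",   # trainable adapter
    "model.layers.0.self_attn.q_proj.lora_B.weight",   # trainable adapter
]
print(sorted(select_trainable(params)))
```

Whether this project actually unfreezes the LoRA layers during the CLIP training stage is exactly the question above; the sketch only shows the two alternatives being distinguished.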
Dear team,
Thanks for your great work.
I would like to know how to get the text feature for the grounding task.
I see that you use the LLaMA backbone with chinese_alpaca_lora_7b; however, there is a mismatch in the token dimension.
The number of tokens in the tokenized original sentence is always 5 smaller than the token dimension of the text feature you extracted, and the offset is a consistent number.
I want to know: apart from the global token, are any new tokens added to the sentences?
Thank you!