OpenGVLab / InternVideo

[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
Apache License 2.0

About the text features used in Grounding task. #213

Open liangliangdeveloper opened 11 hours ago

liangliangdeveloper commented 11 hours ago

Dear team,

Thanks for your great work.

I would like to know how to get the text feature for the grounding task.

I see that you use the LLaMA backbone with chinese_alpaca_lora_7b; however, I notice a mismatch in the token dimension.

The number of tokens in the tokenized original sentence is always smaller than the token dimension of the text features you extracted by 5, which is a consistent difference.

I would like to know: apart from the global token, are any additional tokens added to the sentences?

Thank you!

liangliangdeveloper commented 11 hours ago

Also, I am wondering whether the text features you provide come from the original LLaMA or from a LLaMA model fine-tuned on your data.

yinanhe commented 11 hours ago

Hi, could you provide a more detailed example for the first question? If you are referring to the temporal grounding task, the token lengths of the features we extract do not all differ by 5; the token length is positively correlated with the number of words in the sentence. In addition, we did not fine-tune the LLaMA model for the grounding task.

liangliangdeveloper commented 11 hours ago

Thank you for your reply!

[attached screenshot]

For the first question, I see that you add the prefix "summarize:" to all prompts. That explains the difference in token counts. Why did you choose this design?
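For reference, a fixed prefix would shift the token count by the same constant for every sentence, matching the consistent offset described above. A minimal sketch of this effect, using a toy whitespace tokenizer as a stand-in for the actual LLaMA/Alpaca SentencePiece tokenizer (the real offset depends on how that tokenizer splits "summarize:" and on special tokens such as BOS):

```python
# Toy illustration: a fixed prompt prefix adds a constant number of tokens,
# independent of the sentence. Uses a whitespace tokenizer as a stand-in
# for the real LLaMA tokenizer, so the exact offset here (2) is illustrative.

def toy_tokenize(text):
    """Split on whitespace; a real tokenizer would use BPE/SentencePiece."""
    return text.split()

def tokens_with_prefix(sentence, prefix="summarize:"):
    # Prepend the prefix plus one global/BOS token before tokenizing,
    # analogous to the pipeline described in this issue.
    return ["<bos>"] + toy_tokenize(prefix + " " + sentence)

for sentence in ["a man is playing guitar", "hello world"]:
    plain = toy_tokenize(sentence)
    prefixed = tokens_with_prefix(sentence)
    # The length difference is the same constant for every sentence.
    print(len(prefixed) - len(plain))
```

Under this toy tokenizer the offset is always 2 (one prefix token plus one global token); with the real tokenizer the same logic would yield whatever constant "summarize:" plus the special tokens occupy, e.g. the 5 observed here.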

Regarding the other question, I am wondering whether LLaMA is fine-tuned during CLIP training. Are all LLaMA parameters frozen, or are the LoRA layers trainable in the CLIP training stage?