WillDreamer / Aurora

[NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model
https://arxiv.org/abs/2305.08381

the training set size of MSR-VTT #11

Closed: Arsiuuu closed this issue 9 months ago

Arsiuuu commented 10 months ago

Could you please tell me the training set size Aurora uses for video retrieval on MSR-VTT? I ask because there are two annotation versions, the 7k and the 9k split.

xinlong-yang commented 9 months ago

We follow the setting used in UniAdapter, which is the 1k split.
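
If you need to check which split a given annotation file corresponds to, a quick way is to count the unique training videos it contains. The sketch below is only illustrative and not from the Aurora codebase: the file name `msrvtt_train.json` and its structure (a list of caption entries with a `"video"` key, as used by several retrieval codebases) are assumptions.

```python
# Minimal sketch (assumed file name and schema, not Aurora's actual data layout):
# count unique videos in an MSR-VTT training annotation file to tell the
# 7k and 9k training splits apart.
import json

with open("msrvtt_train.json", "r") as f:
    annotations = json.load(f)

# Captions may repeat per video, so count unique video ids.
video_ids = {entry["video"] for entry in annotations}

print(f"unique training videos: {len(video_ids)}")
# ~9000 unique videos -> 9k training split
# ~7000 unique videos -> 7k training split
```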