WillDreamer / Aurora

[NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model
https://arxiv.org/abs/2305.08381

The training set size of MSR-VTT #11

Closed Arsiuuu closed 11 months ago

Arsiuuu commented 1 year ago

Could you please tell me the training set size used by Aurora for video retrieval on MSR-VTT? I found that there are two annotation versions, 7k and 9k.

xinlong-yang commented 11 months ago

We follow the setting used in UniAdapter, which is the 1k split.
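
For anyone unsure which annotation version they have locally, here is a minimal sketch (not from the Aurora codebase) that counts the unique training videos in an MSR-VTT annotation file. The file path and the `"video"` / `"video_id"` keys are assumptions about a typical retrieval annotation JSON; adjust them to match your own files.

```python
import json

def count_train_videos(annotation_path: str) -> int:
    """Count unique training videos in an MSR-VTT annotation file.

    Assumes the file is a JSON list of caption records, each carrying a
    "video" (or "video_id") field; adjust the keys if your file differs.
    """
    with open(annotation_path, "r") as f:
        annotations = json.load(f)
    video_ids = {item.get("video", item.get("video_id")) for item in annotations}
    return len(video_ids)

if __name__ == "__main__":
    # "msrvtt_train.json" is a placeholder path; point it at your train annotations.
    n = count_train_videos("msrvtt_train.json")
    # Roughly 7k unique videos indicates the 7k split, roughly 9k the 9k split.
    print(f"unique training videos: {n}")
```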