PKU-YuanGroup / LanguageBind

【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
https://arxiv.org/abs/2310.01852
MIT License

Can you share the NYU-D dataset you used for evaluation (e.g., how the dataset is split)? #29

Closed: bf-yang closed this issue 10 months ago

LinB203 commented 10 months ago

We have uploaded it here. Please pull the latest code.

bf-yang commented 9 months ago

@LinB203 Thanks for sharing the Depth and Thermal datasets. It seems they only include a validation split with limited data. Could you also share training data for these datasets, e.g., NYU and LLVIP? Thanks!
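
For anyone who wants to build their own split while waiting for the official training lists, here is a minimal sketch of producing a reproducible train/val split from a directory of NYU-D samples. The directory layout, file extension, 90/10 ratio, and output file names are assumptions for illustration only; the split actually used in the paper is the one shipped with the repository.

```python
import random
from pathlib import Path

# Hypothetical illustration: derive a reproducible train/val split from a
# flat directory of NYU-D sample files. Paths, extension, and ratio below
# are assumptions, not the authors' actual setup.
DATA_ROOT = Path("datasets/nyu_d")   # assumed: one file per sample
VAL_FRACTION = 0.1
SEED = 42

samples = sorted(p.stem for p in DATA_ROOT.glob("*.png"))
rng = random.Random(SEED)            # fixed seed keeps the split reproducible
rng.shuffle(samples)

n_val = int(len(samples) * VAL_FRACTION)
val_ids, train_ids = samples[:n_val], samples[n_val:]

Path("nyud_val.txt").write_text("\n".join(val_ids) + "\n")
Path("nyud_train.txt").write_text("\n".join(train_ids) + "\n")
print(f"train: {len(train_ids)}  val: {len(val_ids)}")
```

Note that results from a custom split like this are not directly comparable to the paper's numbers, which use the split provided in the repository.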