clip-vil / CLIP-ViL

[ICLR 2022] code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383
MIT License

Where can I find annotations for SNLI-VE? #32

Closed 1219521375 closed 8 months ago

1219521375 commented 1 year ago

It seems that the model uses annotation files that differ from the ones referenced in the code. What is the difference between them and the original SNLI-VE jsonl files, and where can I find them? Could you share them with us? Thank you in advance!

# CLIP-ViL-Pretrain/src/tasks/snli_data.py
text_db_paths = {
    "valid": "/local/harold/ubert/clip_vlp/lxmert/data/snli_ve/txt_db/ve_dev.db",
    "train": "/local/harold/ubert/clip_vlp/lxmert/data/snli_ve/txt_db/ve_train.db",
    "test": "/local/harold/ubert/clip_vlp/lxmert/data/snli_ve/txt_db/ve_test.db",
}
1219521375 commented 1 year ago

I used annotation files downloaded from UNITER and found that they work.
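For reference, the original SNLI-VE annotations are distributed as jsonl (one JSON object per line), whereas the paths above point to UNITER-style txt_db databases. A minimal sketch of loading such a jsonl file is below; the field names (`Flickr30K_ID`, `sentence2`, `gold_label`) follow the public SNLI-VE release but are assumptions here, and the sample record is fabricated purely for illustration.

```python
import json
from collections import Counter

def load_snli_ve_jsonl(path):
    """Load SNLI-VE annotations from a jsonl file (one JSON object per line)."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Write a tiny illustrative record so the sketch is self-contained.
# Field names are assumptions based on the public SNLI-VE jsonl format.
sample = {
    "Flickr30K_ID": "0000000000",          # hypothetical image id
    "sentence2": "A dog runs on the grass.",  # hypothesis text
    "gold_label": "entailment",
}
with open("ve_dev_sample.jsonl", "w") as f:
    f.write(json.dumps(sample) + "\n")

examples = load_snli_ve_jsonl("ve_dev_sample.jsonl")
labels = Counter(ex["gold_label"] for ex in examples)
print(len(examples), dict(labels))
```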