-
Hi @zinengtang,
Can you share your fine-tuned VQAv2 checkpoint? I could not find it.
Thanks!
-
### Question
I downloaded '_llava-1.5-7b_' as '_model_base_' and the LoRA weights '_llava-v1.5-7b-lora_' as '_model_path_'.
I ran the vqav2.sh provided by the author, trying to reproduc…
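In case it helps others with the same setup, here is a generic sketch of how LoRA weights are typically merged onto a base checkpoint using the PEFT library; the paths are placeholders, and as far as I understand LLaVA's own loader performs an equivalent merge internally when both a model base and a LoRA model path are given.

```python
# Generic sketch (not the exact LLaVA code): applying LoRA weights to a base
# model with the PEFT library. All paths below are placeholders.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "path/to/model_base",          # base checkpoint, i.e. '_model_base_' above
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/lora_weights")  # '_model_path_'
model = model.merge_and_unload()   # fold the LoRA deltas into the base weights
```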
-
* Name of dataset: VQAv2
* URL of dataset: https://visualqa.org/download.html
* License of dataset: Creative Commons Attribution 4.0 International License
* Short description of dataset and use case(s): VQA…
-
I see that there is an option to set the dataset to `VQA`. I wanted to know if I could train it on `VQAv2`, and if so, how?
-
Hi, thanks again for the nice work! I was trying to reproduce the experiments on VQAv2 using your pretrained weights and evaluate with this [repo](https://github.com/GT-Vision-Lab/VQA) mentioned in t…
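For reference, scoring with that repo usually follows its evaluation demo; a minimal sketch, assuming the demo's module layout and using placeholder file names:

```python
# Sketch of scoring with the GT-Vision-Lab/VQA toolkit, following its
# evaluation demo; all file paths below are placeholders.
from vqaTools.vqa import VQA
from vqaEvaluation.vqaEval import VQAEval

ann_file = "v2_mscoco_val2014_annotations.json"          # ground-truth answers
ques_file = "v2_OpenEnded_mscoco_val2014_questions.json"
res_file = "results.json"  # [{"question_id": ..., "answer": ...}, ...]

vqa = VQA(ann_file, ques_file)
vqa_res = vqa.loadRes(res_file, ques_file)
vqa_eval = VQAEval(vqa, vqa_res, n=2)  # n = decimal places in reported accuracy
vqa_eval.evaluate()
print("Overall accuracy: %.2f" % vqa_eval.accuracy["overall"])
```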
-
- [VQAv2](https://arxiv.org/pdf/1612.00837v3)
- [TallyQA: Answering Complex Counting Questions](https://arxiv.org/pdf/1810.12440)
- [GQA: A New Dataset for Real-World Visual Reasoning and Compos…
-
Thanks for the great work.
Will the code related to the following table be open-sourced soon? And does the current code support OK-VQA fine-tuning?
![image](https://user-images.githubusercontent.com/291…
-
The paper shows experiments with BLIP-2 fine-tuned on VQAv2, but the fine-tuned models aren't listed in the model zoo or available on the Hugging Face Hub.
Any plans to release them? Thanks!
-
May I ask how you evaluated on the VQAv2 dataset? I couldn't find the annotation file for the test set on the official website.
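For context: the VQAv2 test-set answers are not distributed, so a common route (though I'd defer to the authors on what they actually did) is to dump predictions in the submission format and upload them to the VQAv2 evaluation server on EvalAI. A minimal sketch, with hypothetical file names and a hypothetical example entry:

```python
# Write predictions in the VQAv2 evaluation-server submission format:
# a JSON list of {"question_id": int, "answer": str} records.
import json

predictions = [
    {"question_id": 262148000, "answer": "yes"},  # hypothetical example entry
]
with open("vqav2_testdev_results.json", "w") as f:
    json.dump(predictions, f)
```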
-
I tried to reproduce the fine-tuning results of BLIP-2 FlanT5-XL on VQAv2, but my results are far from those in the paper. My best accuracy was 76.58%, while the paper reports 81.55%. I wa…
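For anyone comparing setups, a minimal sketch of loading the matching LAVIS checkpoint before fine-tuning, assuming the `blip2_t5` / `pretrain_flant5xl` names from the LAVIS model zoo (starting from a different checkpoint or model type is one possible source of an accuracy gap):

```python
# Sanity-check the starting checkpoint by loading it through LAVIS.
import torch
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"
model, vis_processors, txt_processors = load_model_and_preprocess(
    name="blip2_t5", model_type="pretrain_flant5xl", is_eval=True, device=device
)
```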