-
Hi,
I followed the steps in the README file.
Now when I run:
` bash run/vqa_finetune.bash 0 vqa_lxr955_tiny --tiny`
I get the following error:
> Load 632117 data from split(s) train,…
-
## Problem statement
1. Performance bottleneck in knowledge-based VQA due to a two-phase architecture consisting of knowledge retrieval from external sources and training the question answering task in super…
-
The `external/VQA` module is not included in the pip package.
-
System environment:
Ubuntu 20.04
torch 2.0.0+cu118
torchvision 0.15.1+cu118
Command line:
#run_vqav2_ft.py --train_config_f…
-
I want to work on the VQA v2 dataset.
Could you please explain how you implemented the following line from the paper in code?
**Our best results are achieved by combining the best single relation models through weighte…
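Not the author, but for context: a weighted combination of single-relation models is commonly implemented by averaging the models' answer logits with weights tuned on a validation set. A minimal sketch of that idea (all names are mine, not the repo's actual code):

```python
import numpy as np

def weighted_ensemble(logits_per_model, weights):
    """Combine per-model answer logits with scalar weights.

    logits_per_model: list of (num_answers,) arrays, one per relation model
    weights: list of scalars (e.g. tuned on the validation set)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize so the weights sum to 1
    stacked = np.stack(logits_per_model)     # (num_models, num_answers)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy example: two "relation models" scoring three candidate answers
combined = weighted_ensemble(
    [np.array([2.0, 0.5, 0.1]), np.array([0.2, 1.5, 0.3])],
    weights=[0.7, 0.3],
)
predicted = int(np.argmax(combined))  # index of the highest combined score
```

Whether the paper learns the weights jointly or grid-searches them on validation data is exactly the detail worth asking the authors about.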
-
Thank you very much for your work. Now I want to train the model on other datasets (such as VQA-Med). How should I prepare my data? We would appreciate it if you could provide us with your code for p…
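While waiting for the authors: many VQA codebases expect one JSON record per question, holding an image path, the question text, and the answer(s). A sketch of converting VQA-Med-style data into such a layout (the field names are illustrative, not this repo's actual schema):

```python
import json

# Hypothetical per-question records; adapt the field names to whatever
# the target codebase's data loader actually reads.
records = [
    {
        "image": "images/synpic12345.jpg",
        "question": "What imaging modality is shown?",
        "answers": ["ct"],
    },
]

with open("vqa_med_train.json", "w") as f:
    json.dump(records, f, indent=2)

# Round-trip check that the file parses back as written
with open("vqa_med_train.json") as f:
    loaded = json.load(f)
```

The safest approach is still to mirror the exact structure of the JSON files the repo ships for VQA v2 and point the training config at your converted files.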
-
Hi @tbmoon,
I followed every step, but
when running `python3 make_vacabs_for_questions_answers.py --input_dir='../COCO-2015'`
I got an error. Could you help me?
Traceback (most recent call l…
-
Hi,
Thank you for your great work on BLIP2. I found there is no zero-shot VQA evaluation code for BLIP2-OPT, so I created one, referring to the FLAN-T5 code. However, the accuracy is very low. I will b…
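One common cause of unexpectedly low numbers is the scoring step rather than the model: the official VQA metric is a soft accuracy over the ten human answers, and it also normalizes answers (lowercasing, stripping articles and punctuation) before matching. A sketch of the soft-accuracy part in its common simplified form (the official code additionally averages over leave-one-out annotator subsets; normalization is omitted here):

```python
def vqa_soft_accuracy(prediction, human_answers):
    """Simplified VQA soft accuracy: min(#annotators who gave this answer / 3, 1).

    prediction: the model's answer string (assumed already normalized)
    human_answers: the ten annotator answers for the question
    """
    matches = sum(a == prediction for a in human_answers)
    return min(matches / 3.0, 1.0)

answers = ["yes"] * 8 + ["no"] * 2
score_yes = vqa_soft_accuracy("yes", answers)  # 8 matches, capped at 1.0
score_no = vqa_soft_accuracy("no", answers)    # 2 matches -> 2/3
```

If your evaluation does exact string matching against a single ground-truth answer without normalization, scores can drop sharply even when the generations are reasonable.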
-
I would really like to know how to process the VQA dataset files, such as "stat_words.json".
-
Hi,
first of all, thanks for the amazing work!
You wrote in the paper: "For a fair comparison with existing methods, we constrain the decoder to only generate from the $3,192$ candidate answers". …