-
with this command on VQA v1.0:
python train.py --vqa_trainsplit trainval --path_opt options/vqa/mutan_att_trainval.yaml
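For context, `--path_opt` points at a YAML options file that drives the run, with command-line flags layered on top. A minimal sketch of that config-driven pattern is below; the merge logic and dotted key names are illustrative, not the repo's actual schema:

```python
import yaml

def load_options(path_opt, overrides=None):
    """Load a YAML options file and apply dotted-key command-line overrides."""
    with open(path_opt) as f:
        options = yaml.safe_load(f)
    # e.g. --vqa_trainsplit trainval becomes the override {'vqa.trainsplit': 'trainval'}
    for key, value in (overrides or {}).items():
        section, name = key.split('.', 1)
        options.setdefault(section, {})[name] = value
    return options

opt = load_options('options/vqa/mutan_att_trainval.yaml',
                   overrides={'vqa.trainsplit': 'trainval'})
```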
-
This may be a trivial question, but I'm completely new to Torch; I tried searching on Google but had no luck. I'm working on an Ubuntu 14.04 machine with CUDA 7.0 and cuDNN R4. I prepared all traini…
-
- [VQAv2](https://arxiv.org/pdf/1612.00837v3)
- [TallyQA: Answering Complex Counting Questions](https://arxiv.org/pdf/1810.12440)
- [GQA: A New Dataset for Real-World Visual Reasoning and Compos…
-
Where is the VQA_gt.py file?
-
When I train on VQA v2 with the defaults:
CUDA_VISIBLE_DEVICES=$1 python src/vqa.py \
--train train \
--valid train \
--test minival,nominival \
It reported: No such file or…
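A "No such file or…" error at this step usually means one of the preprocessed data files the loaders expect is missing. A quick hedged sanity check along these lines can narrow it down; the paths below are placeholders, so substitute whatever your config actually points at:

```python
import os

# Hypothetical paths -- replace with the files your split names map to.
expected = [
    'data/vqa/train.json',
    'data/vqa/minival.json',
    'data/vqa/nominival.json',
]
for path in expected:
    status = 'ok' if os.path.exists(path) else 'MISSING'
    print(f'{status:8s} {path}')
```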
-
Thanks a lot for your excellent work. I wonder how you evaluate the trained model: do you use ./scripts/more/eval/pope.sh, which uses llava.eval.model_vqa_loader for evaluation (seems no modification f…
-
Thanks for your great work!
I want to view the results after evaluation; where can I find the WandB project "llm-driver"?
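If the project is public (or you have access to it), its runs can be listed with the standard WandB public API. A minimal sketch, where "your-entity" is a placeholder for whichever team or user owns the project:

```python
import wandb

api = wandb.Api()
# "your-entity" is a placeholder -- replace with the project's owner.
runs = api.runs("your-entity/llm-driver")
for run in runs:
    print(run.name, run.state)
```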
-
runfile('F:/project/vqa.pytorch-master/demo_server.py', wdir='F:/project/vqa.pytorch-master')
File "F:\project\vqa.pytorch-master\demo_server.py", line 80
visual_data = visual_data.cuda(async=…
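This SyntaxError is the Python 3.7+ reserved-keyword problem: `async` became a keyword, so the old `tensor.cuda(async=True)` spelling no longer parses. Assuming the truncated call was of that form, a minimal sketch of the modern replacement:

```python
import torch

visual_data = torch.randn(1, 3, 224, 224)  # stand-in for the real input batch
if torch.cuda.is_available():
    # Python 3.7+ replacement for the old .cuda(async=True) spelling.
    visual_data = visual_data.cuda(non_blocking=True)
```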
-
In vqa_dataset.py, I don't understand why, when split != 'train', self.annotation is set to 'color_test.json'. Should we modify this name manually when testing 'shape' or 'texture'?
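One hypothetical way to avoid the manual edit is to derive the filename from the attribute under test; the class and file names below are assumptions about the repo's layout, not its actual code:

```python
import json
import os

class VQADataset:
    def __init__(self, root, split='train', attribute='color'):
        # Hypothetical scheme: select color/shape/texture_test.json by argument
        # instead of hard-coding 'color_test.json' for every non-train split.
        if split == 'train':
            name = 'train.json'
        else:
            name = f'{attribute}_test.json'
        with open(os.path.join(root, name)) as f:
            self.annotation = json.load(f)
```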
-
Hi, how can I add the Visual7W dataset for the VQA task? The adding-datasets documentation covers the AVSD task, and I'm not sure how to follow similar steps for a different task... My data has images, que…
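As a rough starting point while the docs only cover AVSD, a minimal PyTorch-style dataset for image/question/answer triples might look like the sketch below. The field names and file layout are assumptions about Visual7W, not the toolkit's actual interface:

```python
import json
from PIL import Image
from torch.utils.data import Dataset

class Visual7WDataset(Dataset):
    """Hypothetical wrapper: one JSON record per (image, question, answer)."""

    def __init__(self, annotations_path, image_dir, transform=None):
        with open(annotations_path) as f:
            self.records = json.load(f)  # assumed: a list of dicts
        self.image_dir = image_dir
        self.transform = transform

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        image = Image.open(f"{self.image_dir}/{rec['image']}").convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        return image, rec['question'], rec['answer']
```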