Closed — tfausten closed this issue 2 weeks ago
@deepanks @tfausten @jiasenlu : Did you resolve this issue?
I get the following error when running the VQA task:
Traceback (most recent call last):
  File "eval_tasks.py", line 228, in <module>
    main()
  File "eval_tasks.py", line 209, in main
    task_id, batch, model, task_dataloader_val, task_losses, results, others)
  File "/home/tobias/vilbert_beta/vilbert/task_utils.py", line 353, in EvaluatingModel
    question = question.view(-1, question.size(2))
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)
I am using the downloadable coco resnet features as data and the following command to run the script:
python eval_tasks.py --bert_model bert-base-uncased \
  --from_pretrained save/VQA_bert_base_6layer_6conect-pretrained/pytorch_model_19.bin \
  --config_file config/bert_base_6layer_6conect.json \
  --task 0 --split test --batch_size 100
It seems that the tensors loaded from the data do not have the right dimensions?
Any help is appreciated! Thanks!
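For anyone hitting the same IndexError: the failing line assumes a 3-D question tensor, but on this split the tensor is apparently only 2-D, so indexing dimension 2 is out of range. A minimal sketch of the shape logic, with NumPy standing in for torch and a hypothetical `flatten_options` helper (not the actual fix merged into the repo):

```python
import numpy as np

# task_utils.py does `question = question.view(-1, question.size(2))`, which
# assumes a 3-D tensor [batch, num_options, seq_len]. If the loaded question
# tensor is only 2-D [batch, seq_len], size(2) raises IndexError. A
# hypothetical guard (NumPy reshape standing in for torch's view):
def flatten_options(question):
    if question.ndim == 3:
        # [batch, num_options, seq_len] -> [batch * num_options, seq_len]
        return question.reshape(-1, question.shape[2])
    return question  # already [batch, seq_len]; nothing to flatten

q3 = np.zeros((2, 4, 16))
q2 = np.zeros((2, 16))
print(flatten_options(q3).shape)  # (8, 16)
print(flatten_options(q2).shape)  # (2, 16)
```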
Here comes another error after following your command:
vilbert_beta/vilbert/datasets/_image_features_reader.py", line 62, in __getitem__
    index = self._image_ids.index(image_id)
ValueError: b'1' is not in list
Any suggestion? Thanks!
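For context on this ValueError: `_image_ids` in `_image_features_reader.py` is a list of byte-string keys read from the LMDB feature file, so a lookup fails when the queried id is not encoded the same way, or simply isn't present in that feature file. A hypothetical illustration of the key normalization involved (the keys below are made up):

```python
# _image_ids holds byte-string keys from the LMDB feature file, e.g. b'25'.
# If the dataset hands over a plain int, .index() cannot find it; encoding
# the id the same way as the stored keys makes the lookup succeed.
image_ids = [b"9", b"25", b"30"]   # made-up keys for illustration
image_id = 25                      # id as it might arrive from the dataset
key = str(image_id).encode()       # int -> b"25" to match the stored keys
index = image_ids.index(key)
print(index)  # 1
```

If the key really is absent (as with b'1' above), the features file likely does not match the split being evaluated.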
@arjunakula @Zzmonica No sorry, I didn't follow up on the issue.
Hi, I was able to fix the issues with VQA and created a new pull request here: https://github.com/jiasenlu/vilbert_beta/pull/27
@Zzmonica, @deepanks @tfausten @jiasenlu: Please let me know if you find any issues with the updated code. Thanks!
Closing the issue.
Well done! I found that for 'TASK0' there is no need to run those lines of code. Thanks!
How do I evaluate VQA on the test set?
@pkulangzi Get the results JSON file from the results directory (something like vilbert_beta/results/VQA../test_results.json) and then upload it as test-standard on the VQA EvalAI website: https://evalai.cloudcv.org/web/challenges/challenge-page/163/overview
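Before uploading, it can be worth sanity-checking the file: the VQA challenge expects a JSON array of objects with "question_id" and "answer" fields. A hypothetical check (the question ids below are made up):

```python
import json

# Sanity-check a results payload before uploading to EvalAI: the VQA
# challenge expects a JSON array of {"question_id": int, "answer": str}.
payload = json.dumps([{"question_id": 262148000, "answer": "yes"},
                      {"question_id": 262148001, "answer": "2"}])
results = json.loads(payload)
assert isinstance(results, list)
assert all({"question_id", "answer"} <= set(r) for r in results)
print(len(results))  # 2
```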
@arjunakula Have you trained the VQA task?
Yes, I used ViLBERT to train and test the VQA task. I got 71.0% on the test data (same as reported in the paper: 70.92%).
The original repo has some issues. As my pull request is not yet merged into the original repo, you can use my GitHub repo https://github.com/arjunakula/vilbert_beta. Let me know if you run into any issues.
Actually, I have run into some errors. I'll try your repo. Thanks!
Hi, only the train and val data have answers (in train_target.pkl and val_target.pkl), but the test data does not have answers. How could you get 71.0% on the test data? Thanks
Accuracy on test data: you should upload your answers for the test questions to the EvalAI server https://visualqa.org/challenge.html to get accuracy on the test split.
Hello, I got an error when training on VQA following the official instructions: FileNotFoundError: [Errno 2] No such file or directory: '/data/. . . /VQA/cache/train_target.pkl'. Could you tell me how to get this pkl file? Thank you very much :)
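For context on what that cache holds: train_target.pkl stores soft answer targets precomputed from the VQA annotations, where the standard VQA scoring gives each answer a score of min(count / 3, 1) over the ten annotator answers. A hypothetical sketch of one such cached entry (the `ans2label` mapping and counts below are made up):

```python
import pickle
from collections import Counter

# Sketch of the kind of record cached in train_target.pkl: soft answer
# labels built from the 10 human answers per question, with the standard
# VQA score min(count / 3, 1). The ans2label mapping here is made up.
ans2label = {"yes": 0, "no": 1, "2": 2}
answers = ["yes"] * 8 + ["no"] * 2          # 10 annotator answers
labels, scores = [], []
for ans, n in Counter(answers).items():
    if ans in ans2label:
        labels.append(ans2label[ans])
        scores.append(min(n / 3.0, 1.0))    # 8/3 capped at 1.0; 2/3 kept as-is
entry = {"question_id": 1, "labels": labels, "scores": scores}
blob = pickle.dumps(entry)                  # what a cache file would contain
print(entry["labels"], [round(s, 2) for s in entry["scores"]])  # [0, 1] [1.0, 0.67]
```

The test split has no published answers, which is why no test_target.pkl exists and test accuracy comes from the EvalAI server instead.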
Hello, did you fine-tune ViLBERT for VQA? If so, could you tell me how to do it? Thanks in advance :)