-
runfile('F:/project/vqa.pytorch-master/demo_server.py', wdir='F:/project/vqa.pytorch-master')
File "F:\project\vqa.pytorch-master\demo_server.py", line 80
visual_data = visual_data.cuda(async=…
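The truncated line above points at the likely cause: `async` became a reserved keyword in Python 3.7, so the old PyTorch spelling `.cuda(async=True)` fails with a SyntaxError before anything runs. A minimal sketch of the usual fix, assuming the snippet is the standard PyTorch call (the argument was renamed to `non_blocking` in PyTorch 0.4; the "fixed line 80" below is hypothetical):

```python
import keyword

# `async` is a reserved keyword in Python >= 3.7, so source containing
# `tensor.cuda(async=True)` raises SyntaxError at parse time.
assert keyword.iskeyword("async")

# PyTorch renamed the keyword argument to `non_blocking`, so the
# hypothetical corrected line 80 would read:
#   visual_data = visual_data.cuda(non_blocking=True)
print("async is a keyword:", keyword.iskeyword("async"))
```

This is why the error appears as a SyntaxError on line 80 rather than a runtime CUDA error: the interpreter rejects the file while parsing it.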
-
Thank you for your open-source code, but I ran into problems when reproducing it and could not match the paper's results on the SROIE-VQA dataset. Could you please help me adjust the code …
-
I was fine-tuning on VQA using VinVL features with the given scripts. However, I am getting 74.82 evaluation accuracy, which is 1.3 points lower than the reported one (76.12). It would be helpful if anyone c…
-
When I try to execute the below code:
```
from donut import DonutModel
import torch
from PIL import Image
pretrained_model = DonutModel.from_pretrained("naver-clova-ix/donut-base")
if tor…
```
-
Hello, thank you very much for open-sourcing your code. I found that for the BLIP-2 model there is only training code for the caption task, and none for the VQA task. What should I do?
-
See #44 for more details about the dataset and network architecture.
The task is:
1. Train a baseline VQA model with decent accuracy. Try using tools from MONAI as much as possible. (related to …
-
```
import pydot
from keras.utils.vis_utils import plot_model  # keras.utils.plot_model in newer Keras
model_vqa = get_VQA_model(VQA_model_file_name, VQA_weights_file_name)
plot_model(model_vqa, to_file='images/model_vqa.png')
```
…
-
Thank you very much for your work. I now want to train the model on other datasets (such as VQA-Med). How should I prepare my data? We would appreciate it if you could provide us with your code for p…
-
Hi, how can I add the Visual7W dataset for the VQA task? The documentation on adding datasets covers the AVSD task, and I'm not sure how to apply similar steps to a different task... My data has images, que…
-
Could anyone tell me where I can download the VQA-CP v2 dataset? The link in the README file is dead.
Thanks!