-
In this section of the README, it says that fine-tuning should take only 2 hours per epoch:
```
If no bug came out, then the model is ready to be trained on the whole VQA corpus:
bash run/vqa_f…
```
-
Hello,
I am trying to run the pre-training of the model again. When I run the command:
```
bash run/lxmert_pretrain.bash 1,2 --multiGPU --tiny
```
I get the following output:
```
Load 174866 …
```
-
Thanks for sharing this code. When I'm fine-tuning on VQA, my RAM usage blows up: with `num_workers` set to 4, it requires 207 GB. I've also tried different batch sizes. The script wi…
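(This kind of blow-up is typical when each `DataLoader` worker holds its own in-memory copy of the full image-feature table. Below is a minimal sketch of one workaround, memory-mapping the features so workers page rows in from disk instead; the `.npy`/`.json` layout, file names, and class are assumptions for illustration, not files this repo produces.)

```python
import json
import numpy as np
from torch.utils.data import Dataset

class LazyFeatureDataset(Dataset):
    """Serves per-image features without loading the whole table into RAM."""

    def __init__(self, feat_path, ids_path):
        # mmap_mode="r" leaves the array on disk; rows are paged in on
        # access, so DataLoader workers don't each copy the full table.
        self.feats = np.load(feat_path, mmap_mode="r")
        with open(ids_path) as f:
            self.ids = json.load(f)  # position -> image_id, aligned with rows

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        # Copy only the single row we need into regular memory.
        return self.ids[idx], np.array(self.feats[idx])
```

With a memory-mapped table, raising `num_workers` no longer multiplies resident memory, because the workers share the same pages through the OS page cache.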
-
Just a small typo. LXMBERT should be LXMERT in the diagram.
-
Thanks for your fantastic work. The code is clear and the operation manual is detailed. There is still one thing I want to know: do you plan to add fp16 support to LXMERT? When I reproduce your work in …
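(Until fp16 is supported in the repo itself, mixed precision can be bolted onto a standard PyTorch training loop with `torch.cuda.amp` (PyTorch ≥ 1.6). A minimal sketch, where `model`, `optimizer`, and `loader` are placeholders rather than the repo's actual objects:)

```python
import torch

scaler = torch.cuda.amp.GradScaler()

for batch in loader:
    optimizer.zero_grad()
    # Run the forward pass with autocast, which picks fp16 per op where safe.
    with torch.cuda.amp.autocast():
        loss = model(**batch)  # placeholder: assumes the model returns a loss
    # Scale the loss so fp16 gradients don't underflow, then step/unscale.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```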
-
https://github.com/airsplay/lxmert/blob/fc1b287cce280f717a3b1b076c2c23dfd492b880/data/mscoco_imgfeat/extract_coco_image.py#L62
a small bug: IMG_ROOT --> img_root :)
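(For context, the fix is to reference the local variable rather than the module-level constant. A minimal illustration of this class of bug; the function and its body here are invented, only the `IMG_ROOT`/`img_root` names come from the report:)

```python
import glob
import os

def list_split_images(img_root, split):
    # Buggy form referenced the module-level constant:
    #   glob.glob(os.path.join(IMG_ROOT, split, "*.jpg"))
    # which silently ignores the directory the caller passed in.
    # Fixed form uses the parameter instead:
    return glob.glob(os.path.join(img_root, split, "*.jpg"))
```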
-
If possible, can you guys release a fine-tuned snapshot of the model on the VQA dataset?
Thanks.
-
When you ran the NLVR2 fine-tuning and reported your result in this issue https://github.com/airsplay/lxmert/issues/1#issuecomment-523983982, the result (dev performance) was 74.39. Similarly, when I …
-
Hi, thanks for releasing your code! I'm not able to reproduce your fine-tuning result on NLVR2. I followed your instructions by downloading the pre-trained model, downloading the image features, pre-p…
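(One common source of run-to-run gaps in fine-tuning results is unseeded nondeterminism. A minimal PyTorch seeding sketch; the seed value and call site are up to you, and this is not the repo's own setup:)

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    # Seed every RNG the training loop might touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade cuDNN speed for deterministic kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```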
-
When trying to run feature extraction using the Docker image, I am running
```
CUDA_VISIBLE_DEVICES=4 python extract_nlvr2_image.py --split train
```
and I get the error
```
python: can't open file 'extrac…
```