-
https://github.com/uclanlp/visualbert#extracting-image-features
Could you go into more detail? Should we install the custom pytorch into a new virtual environment, so it doesn't break the pytorch u…
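One low-risk approach (a sketch, not taken from the repo's docs — the environment name is illustrative) is to keep the custom PyTorch in its own virtual environment so the global install is never touched:

```shell
# Create a fresh virtual environment dedicated to the custom build.
python3 -m venv visualbert-env

# Activate it; anything pip-installed now lands inside the venv only.
. visualbert-env/bin/activate

# Inside the venv, install the custom torch build the repo points to, e.g.:
#   pip install <path-or-url-to-custom-torch-wheel>
# The system-wide PyTorch remains untouched.

# Leave the venv when finished.
deactivate
```

Any script that needs the custom build can then be launched with `visualbert-env/bin/python` directly, without activating the environment first.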
-
I used models like LXMERT and Visual Bert for VQA task on COCO dataset and was able to get a list of probabilities for possible answers (there are limited number of them). How can I get probabilities …
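For turning the model's raw answer scores into probabilities, the standard recipe is a softmax over the logits for the fixed answer vocabulary. A minimal sketch in plain Python (the vocabulary size and score values below are made up for illustration):

```python
import math

# Hypothetical raw scores (logits) over a small answer vocabulary;
# real VQA heads such as LXMERT's emit one score per candidate answer.
logits = [2.0, 0.5, -1.0, 0.1, 1.2]

# Numerically stable softmax: subtract the max before exponentiating.
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Index of the most probable answer in the vocabulary.
best = max(range(len(probs)), key=probs.__getitem__)
```

The resulting `probs` sum to 1, so each entry can be read directly as the model's confidence in the corresponding answer.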
-
Wonderful paper. To check that I'm not getting something very wrong, here is my understanding of the differences between the two papers.
**Transformer-Explainability**
You generate the building blocks with relprop as …
-
The first cell went through, but with the following error message:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the …
-
### Feature request
We currently have [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt) in the library, which, among other tasks, is capable of performing visual question answering (V…
-
Hello, I have seen some related work on the GQA dataset. Most of them show the results on the test-dev and test-std set. Is there any result of MCAN and LXMERT on GQA test-dev and test-std set (or tes…
-
Currently, if we stop the workers:
```
kubectl scale --replicas=0 deploy/datasets-server-prod-datasets-worker
kubectl scale --replicas=0 deploy/datasets-server-prod-splits-worker
```
the star…
-
Dear scholar,
In your paper, you mention modifying the visual context by replacing an object with another object. Where can we download the modified feature files or the data-augmentation files?
-
https://arxiv.org/pdf/1908.02265.pdf
A task to read this paper and implement the model.
-
**run the script:**
```
!CUDA_VISIBLE_DEVICES=0 PYTHONPATH=`pwd` python lxmert/lxmert/perturbation.py --COCO_path val2014 --method transformer_att --is-text-pert false --is-positive-pert true
```
**the…