-
Could you please tell me where I can get the **annotation** files of the OKVQA dataset that are used to calculate the **image features** only?
-
## Environment info
- `transformers` version: Not using `transformers` directly; I'm loading the model "unc-nlp/frcnn-vg-finetuned"
- Platform: MacOS
- Python version: 3.8
- PyTorch version (GPU?): …
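
For reference, feature extraction with that checkpoint is usually done through the helper scripts from the `transformers` LXMERT research-project demo. Below is a minimal sketch, assuming `modeling_frcnn.py`, `processing_image.py`, and `utils.py` from that demo are on the Python path; the image path is a placeholder for one of your OKVQA images:
```python
# Minimal sketch of extracting visual features with "unc-nlp/frcnn-vg-finetuned",
# assuming the helper modules from
# transformers/examples/research_projects/lxmert are importable.
from modeling_frcnn import GeneralizedRCNN
from processing_image import Preprocess
from utils import Config

frcnn_cfg = Config.from_pretrained("unc-nlp/frcnn-vg-finetuned")
frcnn = GeneralizedRCNN.from_pretrained("unc-nlp/frcnn-vg-finetuned", config=frcnn_cfg)
preprocess = Preprocess(frcnn_cfg)

# "path/to/okvqa_image.jpg" is a placeholder.
images, sizes, scales_yx = preprocess("path/to/okvqa_image.jpg")
output_dict = frcnn(
    images,
    sizes,
    scales_yx=scales_yx,
    padding="max_detections",
    max_detections=frcnn_cfg.max_detections,
    return_tensors="pt",
)
features = output_dict.get("roi_features")      # per-region visual features
boxes = output_dict.get("normalized_boxes")     # corresponding bounding boxes
```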
-
The [`config.json` for CTRL](https://huggingface.co/ctrl/blob/main/config.json) on the Model Hub is missing the key `model_type`.
As a result, passing the model repo as a path to `AutoModel.from_pr…
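
Until `model_type` is added to that config, a possible workaround (a sketch, assuming the CTRL architecture is what you need) is to load the architecture-specific class instead of the `Auto` class:
```python
from transformers import CTRLModel, CTRLTokenizer

# Unlike AutoModel, CTRLModel does not rely on `model_type` being present in
# config.json to resolve the architecture, so it loads despite the missing key.
tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLModel.from_pretrained("ctrl")
```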
-
## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow v…
-
Thank you for creating this great repository! I had a few questions specifically about the CLIP demo:
- Is the attention map visualization based on the method mentioned in your paper `Generic Atte…
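
For context, the relevance rule in that line of work weights each layer's attention map by its gradient, keeps the positive part, averages over heads, and accumulates the result across layers. A rough, self-contained sketch of that idea (not the repository's actual implementation, which may differ in normalization and other details):
```python
import torch

def grad_weighted_rollout(attentions, gradients):
    """Sketch of gradient-weighted attention rollout.

    attentions / gradients: one tensor per layer, each of shape
    (num_heads, seq_len, seq_len); gradients are d(score)/d(attention).
    """
    num_tokens = attentions[0].shape[-1]
    # Every token starts out relevant only to itself.
    relevance = torch.eye(num_tokens)
    for attn, grad in zip(attentions, gradients):
        # Gradient-weight the attention, keep positive contributions, average heads.
        cam = (grad * attn).clamp(min=0).mean(dim=0)
        # Rollout-style accumulation across layers.
        relevance = relevance + cam @ relevance
    return relevance
```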
-
Hi,
I tried to run the [multimodal example](https://github.com/huggingface/transformers/tree/master/examples/research_projects/mm-imdb).
By running:
```
python run_mmimdb.py \
--data_dir ../d…
```
-
Hello,
I'm trying to apply LXMERT to a custom dataset that I've created, but the loss is stuck and not moving anywhere. Here are the main changes I've made to the config file:
```
model_config:
lx…
```
-
Thank you for open-sourcing this project.
I could not find the CATT code for the image captioning task.
Will you update this project to support COCO image captioning?
-
Can you please provide an end-to-end example of how to run AllenNLP interpret on some custom text input? Thanks!
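
For reference, a rough sketch of running an AllenNLP Interpret saliency interpreter on custom text input; the model archive path and predictor name below are placeholders for whatever classifier you're using:
```python
from allennlp.predictors.predictor import Predictor
from allennlp.interpret.saliency_interpreters import SimpleGradient

# Placeholder archive path: point this at your own trained model.tar.gz
# (or a public AllenNLP model archive) and the matching predictor name.
predictor = Predictor.from_path("path/to/model.tar.gz", predictor_name="text_classifier")

interpreter = SimpleGradient(predictor)
saliency = interpreter.saliency_interpret_from_json(
    {"sentence": "This is some custom text input to interpret."}
)
print(saliency)  # gradient-based importance scores per input token
```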
-
## Environment data
- Language Server version: 2021.5.2-pre.1
- OS and version: Windows
- Python version (& distribution if applicable, e.g. Anaconda): Anaconda
## Actual behavio…