-
Hi! Could you upload your model files? Thanks!
Best regards.
```
# Model config
model_config = PretrainedConfig.from_json_file(opts.model_config)
model_config.pretrain_tasks = []…
```
-
Hey there!
I found that training the model raised an error because `_read_tsv`'s call to `open` on line 63 of `Utils.py` didn't specify an encoding.
To fix it I changed it fr…
-
This issue contains the test results for the upstream sync, develop PR, and release testing branches. Comment 'proceed with rebase' to approve. Close when maintenance is complete or there will be prob…
-
Hi!
I successfully continued training CLIP on my own data.
This created flax_model.msgpack and config.json.
How can I test the model?
Thanks,
G.
-
Hi,
very nice work and repo. I am trying it out, but as soon as I run 'lxmert_valse_eval.py' I get the error:
`from processing_image import Preprocess`
`ModuleNotFoundError: No module named 'p…
-
Hi, thanks for the nice work and code! BLIP outputs natural-language text for VQA tasks with its decoder, unlike UNITER/LXMERT/etc., which have encoder-only architectures. So, I'm wondering how do y…
-
After following the installation steps and downloading Coqui's model, I'm getting the following error when loading the model. What am I missing?
![image](https://github.com/kanttouchthis/text_gener…