Closed: ecekt closed this issue 4 years ago
As for guidelines about making MMBT work, here is an example on the mm-imdb dataset: https://github.com/huggingface/transformers/blob/master/examples/mm-imdb/run_mmimdb.py.
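In case that path moves again, here is a minimal sketch of the pattern the script follows, assuming the MMBTConfig / MMBTForClassification classes exported by the library at the time and a simplified stand-in for the example's image encoder (the model name, label count, and dummy inputs below are illustrative only):

```python
import torch
import torch.nn as nn
import torchvision

from transformers import AutoConfig, AutoModel, AutoTokenizer, MMBTConfig, MMBTForClassification


class ImageEncoder(nn.Module):
    """Simplified stand-in for the example's image encoder: pools ResNet-152
    feature maps into a fixed number of 2048-dim "image token" embeddings."""

    def __init__(self, num_image_embeds=3):
        super().__init__()
        resnet = torchvision.models.resnet152(pretrained=True)
        self.trunk = nn.Sequential(*list(resnet.children())[:-2])   # keep the conv trunk only
        self.pool = nn.AdaptiveAvgPool2d((num_image_embeds, 1))

    def forward(self, images):                     # images: (batch, 3, 224, 224)
        feats = self.trunk(images)                 # (batch, 2048, 7, 7)
        feats = self.pool(feats).flatten(2)        # (batch, 2048, num_image_embeds)
        return feats.transpose(1, 2).contiguous()  # (batch, num_image_embeds, 2048)


model_name = "bert-base-uncased"
text_config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
transformer = AutoModel.from_pretrained(model_name, config=text_config)

# MMBTConfig wraps the text model's config and records the image feature size.
config = MMBTConfig(text_config, num_labels=2, modal_hidden_size=2048)
model = MMBTForClassification(config, transformer, ImageEncoder())

encoding = tokenizer("a movie about space travel", return_tensors="pt")
image = torch.randn(1, 3, 224, 224)  # dummy image batch
outputs = model(input_modal=image, input_ids=encoding["input_ids"], labels=torch.tensor([1]))
```

The actual example (run_mmimdb.py together with utils_mmimdb.py) additionally handles the multi-label targets, attention masks, and training loop for mm-imdb.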
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi, it seems the examples folder above has been removed. Is that because the multimodal work is still at an intermediate stage?
I believe it's available here: https://github.com/huggingface/transformers/tree/master/examples/contrib/mm-imdb
Hey, is there any progress on this? I can only find the mm-imdb example: https://github.com/huggingface/transformers/tree/master/examples/contrib/mm-imdb
From what I see, your LXMERT model receives only text features (the docs state: "visual_feats - These are currently not provided by the transformers library."); see the sketch below for passing externally extracted features.
Thanks :)
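For what it's worth, LxmertModel does accept visual input; the quoted docstring just means the region features have to be produced by an external detector (typically a Faster R-CNN) rather than by the library itself. A minimal sketch with random tensors standing in for real detector output, assuming the unc-nlp/lxmert-base-uncased checkpoint:

```python
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("a cat sitting on a couch", return_tensors="pt")

# Region features normally come from a Faster R-CNN; random tensors are used
# here only to show the expected shapes.
num_boxes = 36
visual_feats = torch.randn(1, num_boxes, model.config.visual_feat_dim)  # (batch, boxes, 2048)
visual_pos = torch.rand(1, num_boxes, model.config.visual_pos_dim)      # normalized box coords, (batch, boxes, 4)

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
language_hidden = outputs.language_output  # (batch, text_len, hidden)
vision_hidden = outputs.vision_output      # (batch, boxes, hidden)
pooled = outputs.pooled_output             # (batch, hidden)
```

The feature-extraction step itself lives outside the library; if I recall correctly, the lxmert demo under the research projects folder shows one way to run such a detector.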
The link to the MMBT example on mm-imdb is also invalid now.
Here is a working link for now: https://github.com/huggingface/transformers/tree/master/examples/research_projects/mm-imdb
> As for guidelines about making MMBT work, here is an example on the mm-imdb dataset: https://github.com/huggingface/transformers/blob/master/examples/mm-imdb/run_mmimdb.py.
The link is broken!
The link is broken!
See the reply above you :) That seems to work
Hello, it would be great if more multimodal BERT models were included in the library. I have noticed that MMBT from Facebook is provided; however, I was unable to find any guidelines on how to make it work with 🤗 Transformers.
Possible candidates include ViLBERT, VL-BERT, VisualBERT, VideoBERT, and so on.
Best regards.