-
Hi,
When I run "bash baselines/crossmodal_moment_localization/scripts/inference.sh MODEL_DIR_NAME val" everything works as expected.
However, when I run "bash baselines/crossmodal_moment_localiza…
-
# ❓ Questions & Help
Hello, congrats to all contributors for the awesome work with LXMERT! It is exciting to see multimodal transformers coming to huggingface/transformers. Of course, I immediately …
LetiP updated
3 years ago
-
Hi there.
I was trying to use multi-GPU training, so I put the GPU ids in '--device_ids' in baselines/crossmodal_moment_localization/config.py.
I modified the code as below.
`
if opt.tra…
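As a minimal, hypothetical sketch (not the repo's actual code, and the option name `--device_ids` is taken from the question above), a comma-separated GPU-id flag of this kind could be parsed into a list of ints like so:

```python
import argparse

# Hypothetical config sketch: parse a comma-separated --device_ids flag
# (e.g. "0,1,2") into a list of integer GPU ids. This is an illustration
# of the flag format, not the repository's actual config.py.
parser = argparse.ArgumentParser()
parser.add_argument("--device_ids", type=str, default="0",
                    help="comma-separated GPU ids, e.g. 0,1,2")
opt = parser.parse_args(["--device_ids", "0,1"])
device_ids = [int(i) for i in opt.device_ids.split(",")]
print(device_ids)  # [0, 1]
```

The resulting list could then be handed to whatever multi-GPU wrapper the codebase uses.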
-
Hi there!
Thanks for sharing your great work.
It seems you conducted experiments on the DiDeMo dataset without using subtitle information to check the performance of your method.
I have a couple…
okisy updated
3 years ago
-
**Describe the bug**
When trying to migrate the `crossmodal` search example, I try to change the logic by maintaining a simple config inside the `txt_emb` folder:
```yaml
!VSETextEncoder
metas:
py_modules:
…
-
In Section 3.1 of the paper, the source modality uses $Z_{\beta}^{[0]}$ and the intermediate level uses $Z_{\beta}^{[i-]}$. How is $Z_{\beta}^{[i-]}$ obtained, and what does it mean?
Looking forward to your reply. Tha…
-
I want to use the CrossModality feature; how do I extract it?
-
Hi there,
I just tried bitpoll to plan the talks/discussions for our upcoming "Symposium on crossmodal learning", which runs for three days. We have email feedback from several people listing times…
-
Hi,
When I try to run main.lua, I get this error message:
/data/vision/torralba/crossmodal/flickr_videos/scene_extract/lists-full/_b_beach.txt.train : No such file or directory
Do I have to…
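Since this is a plain missing-file error, a quick shell check (path copied verbatim from the error message above) can confirm whether the expected list file is present before running main.lua; on a machine without that dataset it will report it as missing:

```shell
# Check whether the training list file main.lua expects actually exists.
LIST=/data/vision/torralba/crossmodal/flickr_videos/scene_extract/lists-full/_b_beach.txt.train
if [ -e "$LIST" ]; then
    echo "found: $LIST"
else
    echo "missing: $LIST"
fi
```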