-
Hi,
I really appreciate your excellent work! I tried to train on NExT-QA, but first I found that it doesn't have a `qns_bert` file for the test set.
And one more thing, when I shift to NExT-QA and chan…
-
Hi, thank you for sharing such great work.
I would like to know how to perform dense sampling and sparse sampling after uniformly sampling K clip frames.
After sampling K clip frames _c1,...,cK_
…
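For anyone else puzzling over this, one common way to implement the dense/sparse distinction is sketched below. This is my own illustration under assumptions (all function and parameter names are hypothetical, not the authors' code): the video is first split uniformly into K clips, then "dense" takes consecutive frames inside each clip while "sparse" spreads the same number of frames evenly across the clip.

```python
import numpy as np

def sample_clips(num_frames, K, frames_per_clip, dense=True):
    """Uniformly split a video of `num_frames` frames into K clips, then
    sample `frames_per_clip` frame indices inside each clip either densely
    (consecutive frames) or sparsely (evenly spaced).
    Illustrative sketch only, not the repo's actual sampling code."""
    # Uniformly place K clip boundaries across the whole video.
    boundaries = np.linspace(0, num_frames, K + 1, dtype=int)
    clips = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        if dense:
            # Dense: consecutive frames from the start of the clip.
            idx = np.arange(start, min(start + frames_per_clip, end))
        else:
            # Sparse: spread indices evenly over the clip's span.
            idx = np.linspace(start, end - 1, frames_per_clip, dtype=int)
        clips.append(idx)
    return clips
```

For example, with a 100-frame video, K=4 and 4 frames per clip, the sparse mode returns indices like `[0, 8, 16, 24]` for the first clip, while the dense mode returns `[0, 1, 2, 3]`.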
-
What kind of image preprocessing is expected for the pretrained models? I couldn't find this documented anywhere.
If I had to guess, I would assume they expect RGB images with the mean/std norm…
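In case it helps others hitting the same question: a very common default is ImageNet-style normalization. Whether this repo's pretrained models actually use these statistics is an assumption on my part, not something documented, but the usual preprocessing looks like:

```python
import numpy as np

# ImageNet statistics: a common default, NOT confirmed for this repo's models.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(frame_uint8):
    """Convert an HxWx3 RGB uint8 frame to a normalized (C, H, W) array."""
    x = frame_uint8.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD       # per-channel normalization
    return x.transpose(2, 0, 1)                  # HWC -> CHW layout
```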
-
Hi,
I'm wondering whether these baselines are only used for the TGIF-QA benchmark. How can I start training or evaluation on the Action Genome Question Answering (AGQA) dataset?
ByZ0e updated 3 years ago
-
hi,
How can we get the video features? Should we extract them ourselves? Thank you for your great work!
Best,
J(●'◡'●)
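If extracting features yourself turns out to be necessary, the usual pipeline is: sample frames, preprocess them, run a frozen pretrained backbone (e.g. a ResNet) per frame, and pool over time. The sketch below is a hypothetical helper, not the repo's released extraction script; `backbone` stands in for any pretrained model:

```python
import numpy as np

def extract_video_feature(frames, backbone, pool="mean"):
    """Run a frozen backbone over sampled frames and pool into one vector.

    `frames` is a list of preprocessed (C, H, W) arrays, and `backbone` is
    any callable mapping one frame to a 1-D feature vector; in practice it
    would be a pretrained CNN with its classification head removed.
    Illustrative sketch only."""
    feats = np.stack([backbone(f) for f in frames])      # (T, D)
    # Temporal pooling collapses the T per-frame features into one vector.
    return feats.mean(axis=0) if pool == "mean" else feats.max(axis=0)
```

Usage with a toy backbone (per-channel mean, purely for illustration): `extract_video_feature(frames, lambda f: f.reshape(3, -1).mean(axis=1))` returns a single 3-dimensional vector for the whole video.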
-
_Updated by `@infin8x` on February 3 with our new epic issue template_
### Work items
#### Design and specification 📔
- [x] Design: see below
- [x] Docs plan: update examples, create migrati…
-
![error](https://user-images.githubusercontent.com/80946535/114973842-d7b17d00-9eb3-11eb-9869-0965d731dc72.png)
-
Hello,
Thank you for your open-source code and your interesting work! There were some problems in the process of using it.
When I execute the code on the Action task in your tgif-qa branch, I get a cuda …
-
Hello,
Thank you for your excellent work!
When I download the tgif-qa dataset, which includes approximately 124 GB of GIF files (9 zip splits) and some CSV files with question-answer pairs, I f…
-
Using only the answer information, I get very high accuracies (over 90%) on two multiple-choice tasks (Action and Transition).
I used four different codebases (HGA, LAD-Net, HCRN, ours) to investigat…
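For readers unfamiliar with this kind of probe: an "answer-only" baseline deliberately ignores the video and question, scoring candidates purely from answer statistics; high accuracy from such a blind model indicates dataset bias. A minimal sketch of one such probe (a frequency prior over training answers — my own illustration, not any of the four codebases mentioned) is:

```python
from collections import Counter

def answer_only_baseline(train_answers, test_items):
    """Answer-only bias probe for multiple-choice QA.

    Ignores the video/question entirely: builds a frequency prior over the
    correct answers seen in training, then picks the candidate with the
    highest prior for each test item. `test_items` is a list of
    (candidate_answers, ground_truth) pairs. Illustrative sketch only."""
    prior = Counter(train_answers)
    correct = 0
    for candidates, gt in test_items:
        # Choose the candidate answer most frequent as a training answer.
        pred = max(candidates, key=lambda a: prior.get(a, 0))
        correct += (pred == gt)
    return correct / len(test_items)
```

If this blind baseline approaches the accuracy of full models, the benchmark's questions can largely be solved from answer distribution alone.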