-
```kotlin
/**
* Delete records that satisfy the given [predicate].
*/
open fun deleteIf(predicate: (T) -> ColumnDeclaring<Boolean>, limit: Int? = null): Int {
    var seq = database.sequ…
```
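In case it helps, here is a minimal, self-contained sketch of how such a helper could look against Ktorm's delete DSL (assuming Ktorm is the library in question). The `BaseDao` class and the `database`/`table` wiring are hypothetical, and the `limit` handling from the snippet above is left out of this sketch.

```kotlin
import org.ktorm.database.Database
import org.ktorm.dsl.delete
import org.ktorm.schema.BaseTable
import org.ktorm.schema.ColumnDeclaring

// Hypothetical DAO base class; the names and wiring are illustrative only.
open class BaseDao<E : Any, T : BaseTable<E>>(
    protected val database: Database,
    protected val table: T,
) {
    /**
     * Delete records that satisfy the given [predicate].
     * The limit handling from the snippet above is omitted in this sketch.
     */
    open fun deleteIf(predicate: (T) -> ColumnDeclaring<Boolean>): Int {
        // Emits DELETE FROM <table> WHERE <predicate> and returns the affected row count.
        return database.delete(table, predicate)
    }
}
```

With that in place, a call such as `dao.deleteIf { it.id eq 42 }` should translate into a single `DELETE ... WHERE` statement.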
-
Hi! I am trying zero-shot inference with the code below:
```shell
DATA_DIR=data
DATASET=activitynet
DATASET_FILE=ActivityNet-QA
CKPT_PATH=checkpoints/frozenbilm_activitynet.pth
TRANSFORMERS_CACH…
```
-
Hi,
Are the code and models available for Video Captioning and QA?
Thank you.
-
Hi,
After fine-tuning on downstream VideoQA datasets, how is the model evaluated on the test set?
I'm a little confused about this point.
Thanks
-
Is it possible to use the tool for our own videos and dataset? If yes, in addition to the videos, what features are required for pre-training or fine-tuning?
I assume from your readme that: [How to 10…
-
Hello,
I am trying to use your pretrained model to reproduce the results on MSVD-QA. I'm following the same hyperparameters you mentioned in the paper and using the ckpt_pt_howtovqa69m file to initiat…
-
Hi, line 42 in _eval_videoQA.py_ reads the hps file: `hps_file = f'{opts.output_dir}/log/hps.json'`
However, no **hps** file is available from running the download script of _scripts/download_tvqa.sh…
-
Hi,
During inference we always load the best model. However, after fine-tuning there is no checkpoint named $OUTPUT_DIR/ckpt/model_step_best.pt. Can you point to the line in the code where the best…
-
Can you provide a caption csv file for the video?
-
Hi,
I really appreciate your excellent work! I tried to train on NExT-QA, but first I found that it doesn't have a `qns_bert` file for the test set.
And one more thing: when I switch to NExT-QA and chan…