-
I am trying to train a TransformerXL model on a video captioning task.
Command:
```
python train_caption.py -c config/caption/paper2020/anet_coot_vidclip_mart.yaml --coot_feat_dir embeddings
```
But I get …
-
Hi, @ArrowLuo , many thanks for your previous replies, very helpful.
May I ask whether the provided weights are based on pre-training on the HowTo100M dataset? When I do the video captioning downstream …
-
Hello
Thanks for the work. I'm not good at programming, so please let me know if this question is even necessary.
I have tried image captioning before, and normally I can receive captions for any rand…
-
Fields required:
1. ID
2. Name
3. Short Description
4. Short ID (generated)
5. URL for more information
-
Hi Vladimir,
I noticed in the MDVC codebase that you load the I3D CONV features from "./data/sub_activitynet_v1-3.i3d_25fps_stack24step24_2stream.hdf5".
Some questions:
(i) Do you have a script th…
-
Hi, @ArrowLuo, I ran training in the fine-tuning stage for the video captioning task. However, I get the error 'RuntimeError: Default process group has not been initialized, please make sure to call init_p…
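In PyTorch, this error is raised when code calls `torch.distributed` collectives before the default process group exists. A minimal single-process sketch of initializing it before training (the `"gloo"` backend and the localhost address/port here are assumptions for a one-GPU/CPU debug run, not the repo's actual launch setup):

```python
import os
import torch.distributed as dist

# Rendezvous info for a single-process debug run (assumed values).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# Initialize the default process group once, before any
# torch.distributed call (e.g. all_reduce, DistributedDataParallel).
if not dist.is_initialized():
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

print(dist.is_initialized())
```

Multi-GPU launches normally set the rank/world-size environment variables via `torchrun` instead of hard-coding them.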
-
Hi,
Thanks for sharing! If I want to apply the pre-trained model to a small Chinese dataset (with videos), is it possible, and how flexible is it?
-
Hello, yiskw713:
I am rebuilding your repo, and during the rebuild I was confused by some settings in ```config.yaml```.
First, does ```dataset_dir``` mean the directory of features extracted by …
-
The model does not give the same results when tested on the same videos you have uploaded.
-
First of all, thank you for this amazing work.
I am trying to run inference on my own dataset, but first I want to check whether the code works for me. Now I am trying to run validation of vi…