-
The command I used is `python third_to_first_person.py`,
and then I ran into some problems:
1. Why does the CharadesEgo_val_video part always load 0 samples?
cachefile ./caches/third_to_first_person//Charad…
-
Can anyone please share code showing how to extract features using I3D?
Thank you
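In the meantime, the sliding-window logic behind I3D feature extraction can be sketched independently of the network itself. Everything below is an illustrative assumption, not code from this repo: `encode_clip` is a hypothetical stand-in for an I3D forward pass, and the 16-frame window with stride 8 is just a common choice.

```python
def make_clips(num_frames, clip_len=16, stride=8):
    """Yield (start, end) frame indices for overlapping clips,
    the windowing scheme typically used when extracting I3D features."""
    starts = range(0, max(num_frames - clip_len + 1, 1), stride)
    return [(s, min(s + clip_len, num_frames)) for s in starts]


def extract_features(frames, encode_clip, clip_len=16, stride=8):
    """Run the (hypothetical) clip encoder over every window.

    `encode_clip` stands in for a real I3D forward pass: it maps a
    list of frames to a single feature vector.
    """
    return [encode_clip(frames[s:e])
            for s, e in make_clips(len(frames), clip_len, stride)]


# Toy usage: "frames" are ints and the "feature" is just the clip mean.
frames = list(range(40))
feats = extract_features(frames, lambda clip: sum(clip) / len(clip))
print(len(feats))  # one feature per 16-frame window at stride 8 -> 4
```

In a real pipeline, each `frames[s:e]` slice would be stacked into a `T x H x W x 3` tensor and fed to the I3D model, with the pooled mixed-5c activations saved per window.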
-
Hi!
What's the difference between "same person" and "different person" in the paper?
![image](https://user-images.githubusercontent.com/27202239/54086930-18f80a00-4389-11e9-964b-508b9d995e62.png)
-
Hi!
Could you please release the code for extracting CLIP features from the ActivityNet dataset?
Thank you very much! I eagerly await your response.
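While waiting for the official script, a common recipe is to sample video frames at a fixed rate and encode each sampled frame with CLIP's image encoder. The frame-index sampling step can be sketched as below; the 1 fps target is an assumption, not a setting confirmed by the authors.

```python
def sample_frame_indices(num_frames, fps, target_fps=1.0):
    """Uniformly pick frame indices so that roughly `target_fps`
    frames per second are kept (a common setting for CLIP features)."""
    step = max(int(round(fps / target_fps)), 1)
    return list(range(0, num_frames, step))


# A 10-second video at 30 fps, sampled at 1 frame per second:
idx = sample_frame_indices(num_frames=300, fps=30.0)
print(idx)  # [0, 30, 60, ..., 270] -> 10 indices
```

Each selected frame would then be passed through CLIP's preprocessing and image encoder (e.g. the ViT-B/32 variant), and the resulting per-frame embeddings stacked into the video's feature matrix.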
-
In Charades, a lot of fantastic artwork is being created by users presenting the phrases. It would be nice to save the final state of the canvas after each round and give users an opportunity to brows…
-
@piergiaj: Thanks for sharing the implementation in PyTorch. It seems that in your code, the only normalization step performed is center_crop (224 px). Don't we need the images to be mean-subtracted by …
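For context, the original I3D preprocessing is usually described as rescaling pixels to [-1, 1] rather than subtracting per-channel ImageNet means, which would explain the absence of a mean-subtraction step. The two conventions can be contrasted as follows (illustrative sketch, not code from the repo):

```python
def rescale_minus1_1(pixel):
    """I3D-style preprocessing: map [0, 255] to [-1, 1].
    No per-channel mean subtraction is applied."""
    return pixel / 255.0 * 2.0 - 1.0


# The alternative, ImageNet-style scheme (standard torchvision constants):
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def imagenet_normalize(rgb):
    """ImageNet-style preprocessing: scale to [0, 1],
    then standardize each channel with the dataset mean/std."""
    return tuple((c / 255.0 - m) / s
                 for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))


print(rescale_minus1_1(0), rescale_minus1_1(255))  # -1.0 1.0
```

Which convention to use depends on how the checkpoint was trained; mixing them (e.g. feeding [-1, 1] inputs to a model trained with ImageNet normalization) typically degrades accuracy.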
-
Thanks for your code. However, there are only checkpoints pre-trained on ImageNet and Charades, while Kinetics-400 is more commonly used for pre-training. I checked the TensorFlow version from DeepMin…
-
Hello @tomrunia,
Let me first thank you for your work.
I'm looking for the I3D Kinetics-400 model fine-tuned on UCF-101. Is it available?
Thank you
-
Thank you for your work! I am very interested in it. But when I reproduce this work on the Charades dataset with the "Base w/o prompts" setting, the results at [m=0.5, m=0.7] are lower than in the paper. I obtain I…
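For reference, the [m=0.5, m=0.7] metrics are recall at temporal-IoU thresholds, which can be computed as sketched below. This is a generic illustration under the simplifying assumption of one top prediction per query, not the paper's evaluation script.

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


def recall_at_iou(preds, gts, m):
    """Fraction of queries whose top prediction reaches tIoU >= m
    (the quantity reported at thresholds like m=0.5 and m=0.7)."""
    hits = sum(temporal_iou(p, g) >= m for p, g in zip(preds, gts))
    return hits / len(gts)


preds = [(0.0, 5.0), (10.0, 20.0)]
gts = [(0.0, 4.0), (12.0, 30.0)]
print(recall_at_iou(preds, gts, 0.5))  # 0.5: first hit (tIoU 0.8), second miss (0.4)
```

A mismatch at the stricter m=0.7 threshold is often the first symptom of small boundary offsets (e.g. frame-to-second conversion or feature stride), so checking the tIoU of individual predictions can help localize the gap from the paper.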
-
Hi, thanks for sharing the code. I checked the 'charades_train_pseudo_supervision_TEP_PS.json' file. It looks like the data (timestamps and pseudo queries) have already been extracted. Can you share the TEP and …