-
Hi Team,
One clarification question regarding your VISOR -> Epic Kitchens frame index mappings:
In the VISOR README, it is mentioned that the Ground Truth - Sparse Annotations' frame number, as …
-
Hi,
First of all, thanks so much for sharing your outstanding work!
I would love to hear some of your intuitions and thoughts about handling egocentric data:
1. Do you think the model will fare we…
-
Hello, thanks for your wonderful work! I'm interested in the egocentric action recognition task and am trying to do some further research based on this project. Could you please provide the script for fi…
-
Hi there, great work! I'm trying to use the video backbone of EgoVLP alone to extract intermediate feature maps (for a downstream task) on EPIC-Kitchens 100 videos. Two questions:
- Any demo code ava…
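To make the first question concrete, the pattern I have in mind is something like the rough sketch below; the dummy backbone, the hooked module, and the clip layout are placeholders on my side, not EgoVLP's actual API:

```python
# Rough sketch: grab an intermediate feature map from a video backbone with a
# forward hook. The Sequential below is only a stand-in for the EgoVLP video
# encoder; the real model, its module names, and the clip layout would differ.
import torch
import torch.nn as nn

backbone = nn.Sequential(                       # stand-in, NOT the EgoVLP encoder
    nn.Conv3d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(64, 128, kernel_size=3, padding=1),
)

features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

# Hook whichever layer's activations are wanted.
handle = backbone[2].register_forward_hook(save_output("conv2"))

clip = torch.randn(1, 3, 8, 112, 112)           # (B, C, T, H, W); layout is an assumption
with torch.no_grad():
    backbone(clip)
handle.remove()

print(features["conv2"].shape)                  # torch.Size([1, 128, 8, 112, 112])
```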
-
Thanks for this wonderful work!
I can't find the ground truth file for the test dataset in this repository or in the competition [EPIC-KITCHENS-55 Action Recognition](https://competitions.codalab.org/comp…
-
In your paper, you show great results on the EPIC-Kitchens dataset. How can I reproduce them? Is there any code for it? Thanks.
-
Hi, thanks for sharing the nice work!
I'm planning to run 'AVION/scripts/main_videomae_pretrain.py', which needs the '--train-metadata' argument. I checked the [rest of the code](https://github…
-
When preparing EPIC-Kitchens 100 for use with MTV, I'm finding that, to set up the CSV passed into DMVR, I need to define a single label that encodes both the verb and noun labels. How did ViViT and M…
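For context, one encoding I've considered is sketched below (the filename and column names assume the public EPIC-KITCHENS-100 annotation CSVs; whether this matches what ViViT/MTV actually did is exactly what I'm asking):

```python
# Sketch of one way to build a single "action" label from verb and noun labels,
# assuming the standard EPIC-KITCHENS-100 annotations with verb_class and
# noun_class columns. Not necessarily the encoding ViViT/MTV used.
import pandas as pd

df = pd.read_csv("EPIC_100_train.csv")  # filename is an assumption

# Option A: enumerate the (verb_class, noun_class) pairs that actually occur.
pairs = sorted(set(zip(df["verb_class"], df["noun_class"])))
pair_to_action = {p: i for i, p in enumerate(pairs)}
df["action_class"] = [
    pair_to_action[(v, n)] for v, n in zip(df["verb_class"], df["noun_class"])
]

# Option B: a split-independent encoding, e.g. verb_class * 1000 + noun_class,
# valid as long as there are fewer than 1000 noun classes.
df["action_class_fixed"] = df["verb_class"] * 1000 + df["noun_class"]
```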
-
Hi there,
Thanks for the work.
It seems that the link to the pretrained model on EPIC-KITCHENS-100 is disabled. Could you update the link?
![link_disabled](https://github.com/epic-kitchens/epic-kitch…
-
Is there any pretrained model on the Kinetics, Something-Something V2, or EPIC-KITCHENS-100 datasets?