-
Please help add our CVPR 2022 paper & code in this great repo.
Paper: [OpenTAL: Towards Open Set Temporal Action Localization](https://arxiv.org/pdf/2203.05114.pdf)
Code: [https://github.com/Cogi…
-
Hi,
I am planning to train a Boundary Matching Network (BMN) model for Temporal Action Localization. I have created the annotation file following the ActivityNet format. For training BMN, I need to extract t…
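For context, an ActivityNet-style annotation file is a JSON dict keyed by video ID. The field names below (`duration_second`, `segment`, `label`, `subset`) follow common public releases and BMN reimplementations, but the exact schema expected by a given training script may differ, so treat this as a sketch:

```python
import json

# A minimal ActivityNet-style annotation entry. Field names are assumptions
# based on common public releases; verify against the schema the training
# code actually parses.
annotations = {
    "v_example_0001": {
        "duration_second": 120.5,        # total video length in seconds
        "subset": "training",            # "training" / "validation" / "testing"
        "annotations": [
            {"segment": [10.2, 25.7],    # action start / end, in seconds
             "label": "LongJump"},
        ],
    }
}

with open("anet_anno_example.json", "w") as f:
    json.dump(annotations, f, indent=2)
```

Each video can carry multiple `annotations` entries, one per action instance.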
-
I am new here. I want to know: can the output of a temporal action detection model be a video showing the detected action category and boundaries?
-
Hi! Thanks for your great work. I look forward to your guidance on training and evaluating on our own dataset.
-
### What is the problem this feature will solve?
I would like to extract features for temporal action localization. I trained an I3D model to extract the features, following the [README.md](htt…
-
Hi,
I was wondering if the code for evaluating on AVA-Kinetics is available somewhere in the current codebase.
In the file 'https://github.com/OpenGVLab/InternVideo/blob/main/Downstream/Spatial-…
-
Hi, I am using the Attention Cluster model for video classification. The original paper shows that the model uses RGB attention, flow attention, and audio attention. However, there are onl…
-
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
1. Suggest a new feature by leaving a comment.
…
-
Hello, thanks for the great work. I want to reproduce the detection results. Could you provide the code for calculating mAP at different tIoU thresholds and the average mAP on the ANet dataset, please?
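Until the official evaluation script is available, the metric can be approximated as follows — a minimal sketch of per-class AP at a temporal IoU threshold, averaged over the standard ActivityNet thresholds [0.5:0.05:0.95]. Note it uses 11-point interpolation for brevity, whereas the official ActivityNet evaluator uses all-point interpolation, so numbers can differ slightly:

```python
import numpy as np

def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(preds, gts, tiou_thr):
    """AP for one action class at one tIoU threshold.

    preds: list of (score, start, end); gts: list of (start, end).
    """
    preds = sorted(preds, key=lambda p: -p[0])   # rank by confidence
    matched = [False] * len(gts)
    tp = np.zeros(len(preds))
    fp = np.zeros(len(preds))
    for i, (_, s, e) in enumerate(preds):
        ious = [temporal_iou((s, e), g) for g in gts]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= tiou_thr and not matched[best]:
            tp[i] = 1.0
            matched[best] = True                 # each GT matches at most once
        else:
            fp[i] = 1.0
    recall = np.cumsum(tp) / max(len(gts), 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # 11-point interpolated AP (the official evaluator interpolates all points).
    return sum(np.max(precision[recall >= r], initial=0.0)
               for r in np.linspace(0, 1, 11)) / 11.0

def average_map(preds, gts):
    """Mean AP over tIoU thresholds 0.5 to 0.95 in steps of 0.05."""
    thresholds = np.linspace(0.5, 0.95, 10)
    return float(np.mean([average_precision(preds, gts, t) for t in thresholds]))
```

With multiple classes, `average_precision` is computed per class and the per-class APs are averaged to get the mAP at each threshold.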
-
Hi,
Congratulations on such nice work! Also, thank you for open-sourcing the code!
We are trying to use this code on our raw untrimmed videos and want to use this framework for temporal action l…