-
According to the "Implementation Details" part of Section 4 in the original paper, you use the model pre-trained on the ActivityNet-1.3 training set as the feature extractor. And I don't make anyt…
-
Hi!
Could you please release the code for extracting CLIP features from the ActivityNet dataset?
Thank you very much! I eagerly await your response.
-
I am trying to extract RGB frames by following the ActivityNet README.
However, when I run video2npy.py, it cannot read frames from some videos.
In detail, VideoCapture.read() returns False while get(…
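Not the repository's code, but one common cause (which may or may not apply here) is that the frame count reported by the container metadata exceeds the number of frames OpenCV can actually decode, so reads near the end fail. A minimal sketch that samples frame indices and skips undecodable frames (`read_frames` and the sampling helper are hypothetical names, not from video2npy.py):

```python
def sample_indices(total_frames, num_samples):
    """Evenly spaced frame indices in [0, total_frames)."""
    if total_frames <= 0 or num_samples <= 0:
        return []
    step = total_frames / num_samples
    return [min(int(i * step), total_frames - 1) for i in range(num_samples)]

def read_frames(video_path, num_samples=16):
    """Read evenly sampled frames, skipping any that fail to decode."""
    import cv2  # lazy import; requires opencv-python

    cap = cv2.VideoCapture(video_path)
    # Metadata frame count can overcount for corrupt/variable-rate videos.
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in sample_indices(total, num_samples):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:  # read() returns False past the last decodable frame
            frames.append(frame)
    cap.release()
    return frames
```

Counting successful reads against `CAP_PROP_FRAME_COUNT` this way usually shows whether the video itself is truncated or the index math is off.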
-
Dear @Huntersxsx, Thanks for your interesting work.
I have achieved similar results on Charades-STA and TACoS. However, I encountered a problem with ActivityNet.
> "UserWarning: Detected call of…
-
Hi Luowei,
Thanks for sharing your code on frame-wise feature extraction. I am currently using this code to extract frame-wise features for ActivityNet-Entities dataset. I have a problem with the f…
-
Hello, thanks for your excellent work. I am very interested in it!
May I ask whether the extracted features for ActivityNet, GTEA, and BEOID will be released?
-
Hello. Could you please provide the dataset structure of ActivityNet?
Thanks
-
This is very meaningful work, but may I ask how to obtain the video data for the Charades-STA and ActivityNet Captions datasets?
-
Hi, I trained your PyTorch code on the ActivityNet v1.2 dataset, but I can only get the following results:
tIoU@0.1 = 47, tIoU@0.2 = 44, tIoU@0.3 = 40, tIoU@0.4 = 37, tIoU@0.5 = 33 ... it is much lower than…
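For reference when comparing numbers: tIoU@t typically means the fraction of predictions whose temporal IoU with the ground-truth segment is at least t. A minimal sketch of the standard temporal IoU (this is the generic metric, not the repository's exact evaluation code):

```python
def temporal_iou(pred, gt):
    """IoU between two temporal segments given as (start, end) pairs in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
```

For example, `temporal_iou((0, 10), (5, 15))` gives 1/3 (overlap of 5 s over a union of 15 s), so it would count as a hit at tIoU@0.3 but not at tIoU@0.5.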
-
Hi,
I am planning to train a Boundary Matching Network (BMN) model for temporal action localization. I have created the annotation file following the ActivityNet format. For training BMN, I need to extract t…