-
As part of this year's ActivityNet Challenge, we are providing new features extracted by other researchers.
ResNet-152 features and frames [here](http://activity-net.org/challenges/2019/tasks/anet_lo…
-
Hi, thanks for your great work. I was wondering when you will release the code for video captioning, or at least the features, so that I can use the MART code to generate captions. Looking forward to it!
-
The link to download the ActivityNet Captions features seems broken; could you please check it?
-
In the function `calculate_IoU_batch` in `criteria.py`, why do you add 1 to the numerator and the denominator when computing the IoU? I am very confused; it feels unfair!
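For reference, here is a minimal sketch of one plausible reading of the +1: if segments are inclusive frame-index intervals, a segment `[s, e]` covers `e - s + 1` units, and the denominator can never be zero. The function below is illustrative only, not the repo's actual `calculate_IoU_batch`:

```python
import numpy as np

def iou_batch_inclusive(pred, gt):
    """Batched temporal IoU for inclusive [start, end] index intervals.

    pred, gt: arrays of shape (N, 2). With inclusive indices a segment
    [s, e] covers e - s + 1 units, so the +1 measures length in discrete
    units and keeps the denominator nonzero even when s == e.
    Note: the union is taken as the spanning interval (min start to max
    end), a convention used in several temporal-grounding codebases.
    """
    inter_len = np.minimum(pred[:, 1], gt[:, 1]) - np.maximum(pred[:, 0], gt[:, 0]) + 1
    union_len = np.maximum(pred[:, 1], gt[:, 1]) - np.minimum(pred[:, 0], gt[:, 0]) + 1
    return np.clip(inter_len, 0, None) / union_len

# Identical one-unit segments give IoU 1.0 instead of 0/0:
print(iou_batch_inclusive(np.array([[3, 3]]), np.array([[3, 3]])))  # [1.]
```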
-
I ran the code on the ActivityNet Captions dataset, and one epoch takes about 3 hours on a single RTX 2080 Ti. I would like to confirm the expected per-epoch time and the total number of epochs.
-
I am unable to figure out how to read the C3D features from the files provided at http://activity-net.org/challenges/2016/download.html. Could you please explain how to load them?
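For reference, the challenge C3D features are usually shipped as a single HDF5 file. A minimal sketch for inspecting it, where the file name and key layout are assumptions based on the usual release (list the keys of your own file to confirm):

```python
import h5py

# Assumed file name and per-video "c3d_features" dataset; verify against
# your download before relying on these keys.
path = "sub_activitynet_v1-3.c3d.hdf5"

with h5py.File(path, "r") as f:
    video_ids = list(f.keys())                  # e.g. "v_---9CpRcKoU"
    print(len(video_ids), video_ids[:3])
    feats = f[video_ids[0]]["c3d_features"][:]  # load one video's matrix
    print(feats.shape)                          # (num_steps, feature_dim)
```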
-
I am attaching a screenshot of the generated error; on the left you can see the checkpoints produced after training the model.
![Screenshot (10)](https://user-images.githubusercontent.com/…
-
>>> "As we mentioned in the paper, we didn't have access to the full dataset as ActivityNet Captions is distributed as the list of links to YouTube video. Consequently, many videos (~8.8 %) were no lo…
-
Hi, I can't figure out these features.
Are the training, validation, and test features, both the visual "_resnet.npy" and the motion "_bn.npy" files, the same as those used in [densecap](https://github.com/Luo…
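For reference, one quick sanity check is to load the same video's features from both releases and compare shapes and values, assuming plain NumPy arrays saved per video (the paths below are illustrative):

```python
import numpy as np

# Substitute a real video id and the actual directories from each release.
ours = np.load("features/v_xxxxxx_resnet.npy")
theirs = np.load("densecap_features/v_xxxxxx_resnet.npy")

print(ours.shape, theirs.shape)                  # same frame count and dim?
if ours.shape == theirs.shape:
    print(np.allclose(ours, theirs, atol=1e-5))  # numerically identical?
```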
-
I am trying to understand how this dataset is annotated, but I can't find how the videos are divided into segments, or where to find the start and end times of each segment in each video.
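For reference, the segment boundaries live in the annotation JSON files shipped with ActivityNet Captions: each video id maps to a `duration`, a list of `[start, end]` timestamps in seconds, and one sentence per segment. A minimal sketch, assuming the standard `train.json` layout:

```python
import json

with open("train.json") as f:  # standard ActivityNet Captions annotation file
    anns = json.load(f)

vid, entry = next(iter(anns.items()))
print(vid, entry["duration"])  # total video length in seconds
for (start, end), sentence in zip(entry["timestamps"], entry["sentences"]):
    print(f"{start:.1f}s to {end:.1f}s: {sentence}")
```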