-
Currently, when I infer or evaluate a video with the pretrained ActivityNet model, the action label is always 0 [time segments are available as expected]. My guess is that the pretrained model was …
-
I would like to ask a question about the dataset. I looked at some papers and found that some of them test model performance on the ActivityNet Captions validation set, while some papers …
-
The PredNet model uses sequences of images as input, and we need to extract this information from the Moments in Time video dataset. More specifically, we will focus on the Moments in Time "Mini" versi…
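A minimal sketch of the kind of frame extraction this describes, not the repository's actual pipeline; the clip path, frame size, and sequence length below are assumptions.

```python
import cv2
import numpy as np

def video_to_sequence(path, n_frames=10, size=(160, 128)):
    """Read one clip and return an image sequence of shape (n_frames, H, W, 3)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)  # size is (width, height) for cv2.resize
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    # Pad by repeating the last frame if the clip is shorter than n_frames.
    while frames and len(frames) < n_frames:
        frames.append(frames[-1])
    return np.stack(frames).astype(np.float32) / 255.0

# Hypothetical path into the Moments in Time "Mini" layout.
seq = video_to_sequence("Moments_in_Time_Mini/training/cooking/some_clip.mp4")
print(seq.shape)  # e.g. (10, 128, 160, 3)
```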
-
Thanks for your great work! I've been trying to reproduce this code recently, and I'd like to ask how to obtain the external classification scores. Do they come from the I3D results?
My English is not very good, so let me ask about the external…
-
Hello, I have recently been debugging the code you released and ran into a technical problem. The details are below; I hope you can answer them, thank you!
1. The dataset is ActivityNet, and the features are PCA_activitynet_v1-3.hdf5 from http://activity-net.org/challenges/2016/download.html
2. However, during the run, this error message appears: "OSErr…
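A small check, independent of the released code, to see whether the downloaded PCA_activitynet_v1-3.hdf5 itself opens cleanly with h5py; an OSError here usually points to an incomplete download or a wrong path rather than a bug in the code. The file path is an assumption.

```python
import h5py

path = "data/PCA_activitynet_v1-3.hdf5"  # hypothetical location of the downloaded file
with h5py.File(path, "r") as f:
    print("top-level entries:", len(f.keys()))

    def show_first_dataset(name, obj):
        # Print the first dataset found; returning a non-None value stops the traversal.
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
            return True

    f.visititems(show_first_dataset)
```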
-
Hi @ttengwang ~
Thanks for sharing your wonderful work! I want to caption my own custom video, but unfortunately I find that most of the captioning code starts from extracted features, and lit…
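A rough sketch, not part of this repository, of how one might get frame-level features for a custom video before running the captioning code; it uses a torchvision ResNet-50 as a stand-in backbone, while the released models may expect different features (e.g. TSN or C3D), so treat it only as a starting point. File names and the sampling stride are assumptions.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled features
backbone.eval().to(device)

preprocess = T.Compose([
    T.ToPILImage(), T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(video_path, stride=16):
    """Sample every `stride`-th frame and return an (N, 2048) feature array."""
    cap = cv2.VideoCapture(video_path)
    feats, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0).to(device)
            with torch.no_grad():
                feats.append(backbone(x).squeeze(0).cpu().numpy())
        idx += 1
    cap.release()
    return np.stack(feats)

np.save("my_video_feats.npy", extract_features("my_video.mp4"))
```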
-
Hi, I have a question about the evaluation.
I tried the testing and didn't get any errors, but every time the result I get is:
('METEOR', [0.0, 0.0, 0.0, 0.0]), ('Recall', [0.0, 0.0, 0.0, 0.0]), (…
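One small sanity check, separate from the repository's evaluation scripts: score a caption against itself with the pycocoevalcap METEOR wrapper. If even this returns 0.0, the METEOR Java dependency or the environment is broken, rather than the generated captions. The example sentence and video id are made up.

```python
from pycocoevalcap.meteor.meteor import Meteor  # requires Java on the PATH

gts = {"v1": ["a man is cooking in the kitchen"]}  # hypothetical reference
res = {"v1": ["a man is cooking in the kitchen"]}  # identical hypothesis
score, _ = Meteor().compute_score(gts, res)
print("METEOR on identical sentences:", score)  # should be well above 0
```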
-
![WX20210203-180848](https://user-images.githubusercontent.com/16661965/106821389-0b1d9300-664b-11eb-9b67-17714f7d0e5e.png)
The model I used is tsn_r50_320p_1x1x8_50e_activitynet_clip_rgb on Activity…
-
For THUMOS-14, the video features are extracted using a TSN model pre-trained on Kinetics. Could you provide THUMOS-14 features extracted by a TSN model pre-trained on Anet1.3 (Xiong et al., submitted to ActivityN…