Closed — syuqings closed this issue 4 years ago
Maybe you should first run the train() process to generate the corresponding vocabulary for the ActivityNet Captions dataset, and then load the pretrained model for testing.
Yes, I have generated the vocabulary by running the train() process, but its size is about 21,000, which does not match the checkpoint file.
I will check the checkpoint. In the meantime, you can train the model on your own first and reproduce the results in the paper.
OK, thanks.
Hi, Yitian, I loaded the pretrained checkpoint for the ActivityNet dataset but it failed. The vocab size in the checkpoint is about 14,000, while the word2ix.npy generated by train() has about 21,000 entries. Do you know what the problem is, or how to get the corresponding vocabulary? Thanks.
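For anyone hitting the same error: before loading the pretrained weights, you can verify whether your generated vocabulary matches the checkpoint by comparing the vocabulary size against the number of rows in the checkpoint's word-embedding matrix. A minimal sketch below — the parameter name `embedding.weight` and the helper function are assumptions for illustration, not this repo's actual API; adjust the key to whatever your checkpoint uses.

```python
import numpy as np

def vocab_matches_checkpoint(word2ix, state_dict, emb_key="embedding.weight"):
    """Return True if the vocabulary size equals the number of rows in the
    checkpoint's word-embedding matrix. `emb_key` is an assumed parameter
    name; inspect state_dict.keys() to find the real one."""
    return len(word2ix) == state_dict[emb_key].shape[0]

# Toy illustration: a 3-word vocabulary against a matching 3-row embedding.
# In practice you would load word2ix.npy and the checkpoint from disk, e.g.
#   word2ix = np.load("word2ix.npy", allow_pickle=True).item()
word2ix = {"<pad>": 0, "<eos>": 1, "video": 2}
state_dict = {"embedding.weight": np.zeros((3, 512))}
print(vocab_matches_checkpoint(word2ix, state_dict))  # → True
```

If this check fails (e.g. 21,000 vs. 14,000 as above), the vocabulary was built under different preprocessing settings (tokenization, frequency threshold, etc.) than the one used to train the checkpoint, and the embedding shapes will not line up at load time.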