mynlp / cst_captioning

PyTorch Implementation of Consensus-based Sequence Training for Video Captioning

The given val JSON file has some problems — can you share the val JSON file? #4

Open hugddygff opened 6 years ago

hugddygff commented 6 years ago

Thanks a lot!

plsang commented 6 years ago

The val data is stored in the train_videodatainfo.json file, which is provided by the organizer. Please refer to the solution in #1.

hugddygff commented 6 years ago

Sorry for my mistake. I'm interested in the baseline model, but I mostly use TensorFlow and can't follow the PyTorch code yet. Having read the paper, the baseline seems similar to the v2_nagivator model? Also, how is the video feature used to initialize the LSTM's state? I can think of two ideas: 1) pass the feature through an MLP layer to initialize the LSTM's h and c; 2) feed the feature as the input at the first LSTM time step, then feed "bos" at the second step. Which one is used in the paper? Thanks again.
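For readers landing on this thread: the two initialization ideas above can be sketched in plain PyTorch. This is a hypothetical illustration, not code from this repo — the class names, dimensions, and the `feat_proj` projection are all made up for the example; check model.py for what the repo actually does.

```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration only.
FEAT_DIM, EMBED_DIM, HIDDEN_DIM = 2048, 512, 512

class InitStateCaptioner(nn.Module):
    """Idea 1: map the video feature through an MLP to the LSTM's initial (h, c)."""
    def __init__(self):
        super().__init__()
        self.init_h = nn.Linear(FEAT_DIM, HIDDEN_DIM)
        self.init_c = nn.Linear(FEAT_DIM, HIDDEN_DIM)
        self.cell = nn.LSTMCell(EMBED_DIM, HIDDEN_DIM)

    def forward(self, feat, word_embs):
        # feat: (B, FEAT_DIM); word_embs: (B, T, EMBED_DIM), starting with <bos>
        h = torch.tanh(self.init_h(feat))
        c = torch.tanh(self.init_c(feat))
        outs = []
        for t in range(word_embs.size(1)):
            h, c = self.cell(word_embs[:, t], (h, c))
            outs.append(h)
        return torch.stack(outs, dim=1)

class FirstStepCaptioner(nn.Module):
    """Idea 2: feed the (projected) feature as the first LSTM input, then <bos>."""
    def __init__(self):
        super().__init__()
        # Project the feature to the embedding size so it fits the cell input.
        self.feat_proj = nn.Linear(FEAT_DIM, EMBED_DIM)
        self.cell = nn.LSTMCell(EMBED_DIM, HIDDEN_DIM)

    def forward(self, feat, word_embs):
        b = feat.size(0)
        h = feat.new_zeros(b, HIDDEN_DIM)
        c = feat.new_zeros(b, HIDDEN_DIM)
        # Step 0: the video feature itself is the input.
        h, c = self.cell(self.feat_proj(feat), (h, c))
        outs = []
        # Steps 1+: <bos> and the subsequent word embeddings.
        for t in range(word_embs.size(1)):
            h, c = self.cell(word_embs[:, t], (h, c))
            outs.append(h)
        return torch.stack(outs, dim=1)
```

Both variants produce one hidden state per word position; the difference is only where the visual information enters the recurrence.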

plsang commented 6 years ago

I'm not sure what you mean exactly, but you should be able to find those details in the model.py file.