bityangke closed this issue 7 years ago
Hi Ke,
In the frame-level classification task, for each action class, we want to output a ranked list of all frames (regardless of whether each frame is background or action) from all test videos.
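The per-class ranking described above can be sketched as follows. This is a minimal illustration with made-up frame IDs and scores (borrowing the first few values Zheng posts below), not the repo's actual evaluation code:

```python
# Hypothetical data: confidence scores for one action class over all
# test frames, pooled from every test video (made-up frame IDs).
scores = {  # frame_id -> confidence that this frame shows the class
    0: 0.3536, 1: 0.6421, 2: 0.1898, 3: 0.7333, 4: 0.6987,
    5: 0.3380, 6: 0.1924, 7: 0.7181, 8: 0.2970, 9: 0.5479,
}

# Rank all frames (background and action alike) by descending confidence.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:3])  # → [3, 7, 4], the three highest-scoring frames
```

Frame-level average precision is then computed over this ranked list, so background frames are included in the ranking but should end up near the bottom.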
I am not sure whether you have included windows from UCF, which is also part of the training data for THUMOS.
Here you go: 0.353557126593323 0.642114296162291 0.189760542773722 0.733280357644924 0.698720198345160 0.338009204462311 0.192396853482007 0.718121969400894 0.297034795495151 0.547921125656887 0.594920564381471 0.163848854637057 0.383343482364106 0.604982580023873 0.694457766845914 0.497568304164384 0.284239934631868 0.461509067905906 0.227501374780542 0.262718506391104
Best, Zheng
Hi @zhengshou ,
For Ke's second question, I have a similar puzzle. During training, do you use all background frames of the validation set, or just a subset? I counted the number of frames for each class and found that background frames far outnumber every other class. https://docs.google.com/spreadsheets/d/1B0ToBFPy_5GHefxXSu684mEtFcDpcgYuqhLfSVw0Inc/edit?usp=sharing
When I use the training set and the validation set together, the loss does not converge well. If I train on the validation set only, the loss does not converge either. But when I use the training set only, the loss curve looks much more reasonable. Could this be because the validation set contains too many background frames? What do you think the reason might be? Thanks a lot.
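The class-balance check described above can be sketched with a per-frame label count. The labels here are made up for illustration (0 = background, other integers = THUMOS action classes); in practice they would come from the temporal annotations:

```python
from collections import Counter

# Hypothetical per-frame labels: 0 = background, 1..20 = action classes.
frame_labels = [0, 0, 0, 0, 0, 0, 3, 3, 7, 0, 0, 12, 12, 12, 0, 0]

counts = Counter(frame_labels)
total = len(frame_labels)
bg_ratio = counts[0] / total
print(f"background frames: {counts[0]}/{total} ({bg_ratio:.1%})")
```

If the background ratio is very high, a cross-entropy loss can be dominated by background frames, which is one plausible explanation for the convergence behavior described above.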
@shanshuo As we mentioned in the paper: "To prevent including too many background frames for training, we only keep windows that have at least one frame belonging to actions."
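The filtering rule quoted from the paper can be sketched like this. The window representation is hypothetical (each window as a list of per-frame labels, 0 = background); the actual preprocessing code may differ:

```python
def keep_window(window_labels, background=0):
    """Keep a training window only if at least one frame is an action."""
    return any(label != background for label in window_labels)

windows = [
    [0, 0, 0, 0],   # all background -> dropped
    [0, 0, 5, 5],   # contains action frames -> kept
    [2, 2, 2, 0],   # mostly action -> kept
]
kept = [w for w in windows if keep_window(w)]
print(len(kept))  # → 2
```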
I used both the train set and the val set for training.
Hi Zheng, I am Ke Yang from NUDT. I sent you an e-mail, but the mail server told me that delivery failed, so I am posting it here instead. I am very sorry to bother you again. I want to ask you some questions about details of your CVPR 2017 paper, "CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos":
Thank you very much in advance!
Wish you a good day! Ke Yang NUDT 2017/03/28