Closed · akira-l closed this issue 2 years ago
This is fine-tuning from ImageNet initialisation. To fine-tune from the pretrained model, download it:
https://www.robots.ox.ac.uk/~maxbain/frozen-in-time/models/cc-webvid2m-4f_stformer_b_16_224.pth.tar
Then add
"load_checkpoint": PATH_TO_DOWNLOADED_MODEL
to configs/msrvtt_4f_i21k.json
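For reference, the addition to the JSON config might look like this (the path is a placeholder for wherever you saved the downloaded checkpoint; the rest of the file stays as shipped in the repo):

```json
{
    "load_checkpoint": "/path/to/cc-webvid2m-4f_stformer_b_16_224.pth.tar"
}
```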
Thanks! I will try fine-tuning from this pretrained checkpoint.
BTW, I have fine-tuned from the WebVid2M+CC3M+COCO, 4-frame, base_patch_16_224 checkpoint and achieved:
epoch : 27
loss_0 : 0.5593496548643809
val_loss_0 : 1.0900163816081152
val_0_t2v_metrics_R1: 28.4
val_0_t2v_metrics_R5: 55.8
val_0_t2v_metrics_R10: 67.5
val_0_t2v_metrics_R50: 88.6
val_0_t2v_metrics_MedR: 4.0
val_0_t2v_metrics_MeanR: 29.498
val_0_t2v_metrics_geometric_mean_R1-R5-R10: 47.46994959864713
val_0_v2t_metrics_R1: 28.6
val_0_v2t_metrics_R5: 56.2
val_0_v2t_metrics_R10: 68.4
val_0_v2t_metrics_R50: 88.9
val_0_v2t_metrics_MedR: 4.0
val_0_v2t_metrics_MeanR: 26.8585
val_0_v2t_metrics_geometric_mean_R1-R5-R10: 47.90558524371724
Validation performance didn't improve for 10 epochs. Training stops.
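For context, the R@K / MedR / MeanR numbers in the log above are standard retrieval metrics computed from a text-video similarity matrix. A minimal sketch of how they are typically computed (the function name and the diagonal ground-truth assumption are mine, not the repo's actual code):

```python
import numpy as np

def t2v_metrics(sims):
    """Compute text-to-video retrieval metrics from a (num_texts, num_videos)
    similarity matrix where text i matches video i (diagonal ground truth)."""
    order = np.argsort(-sims, axis=1)          # videos sorted by descending similarity
    gt = np.arange(sims.shape[0])[:, None]
    ranks = np.argmax(order == gt, axis=1)     # 0-based rank of the correct video
    return {
        "R1":   100.0 * np.mean(ranks < 1),
        "R5":   100.0 * np.mean(ranks < 5),
        "R10":  100.0 * np.mean(ranks < 10),
        "MedR": float(np.median(ranks)) + 1,   # 1-indexed median rank
        "MeanR": float(np.mean(ranks)) + 1,    # 1-indexed mean rank
    }
```

Swapping the matrix axes (transposing `sims`) gives the corresponding v2t metrics, which is why the log reports two sets of numbers.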
Are these results correct? The R1 is slightly lower than the reported value (31.0). Is this normal fluctuation?
I got similar results to the paper with batch_size = 64.
Run the test script:
python test.py --resume PATH_TO_FINETUNED_CHECKPOINT --sliding_window_stride 12
The sliding_window_stride argument adds temporal averaging over multiple frame samples :)
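Conceptually, the sliding-window option evaluates several temporally shifted frame samples of each video and averages their embeddings before retrieval, which is why it lifts R1 at test time. A minimal sketch of that averaging, assuming a hypothetical `encode(clip)` standing in for the model's video encoder:

```python
import numpy as np

def sliding_window_embedding(frames, encode, num_frames=4, stride=12):
    """Average clip embeddings over sliding windows of `num_frames` frames,
    starting every `stride` frames. `encode` is a placeholder that maps a
    list of frames to a 1-D embedding vector."""
    n = len(frames)
    starts = range(0, max(n - num_frames + 1, 1), stride)
    embs = [encode(frames[s:s + num_frames]) for s in starts]
    return np.mean(embs, axis=0)  # temporal average over window positions
```

A smaller stride means more windows per video and a smoother average, at the cost of more forward passes.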
Exactly! I got further improvements with this test script; R1 rises to 33.7.
I experimented with the fine-tuning procedure and ran the command
python train.py --config configs/msrvtt_4f_i21k.json
and got the results above. There are two R1 results (t2v and v2t). Which one corresponds to the results in the paper? The R1 in Table 5 is 31.0, which seems some way off from these results.