m-bain / frozen-in-time

Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21]
https://arxiv.org/abs/2104.00650
MIT License

Which results in paper correspond to the finetune command? #41

Closed akira-l closed 2 years ago

akira-l commented 2 years ago

I experimented with the finetuning procedure and ran the command python train.py --config configs/msrvtt_4f_i21k.json.

I got:

[v2t_metrics] MSRVTT epoch 27, R@1: 16.1, R@5: 40.5, R@10: 55.0, R@50: 81.9, MedR: 8, MeanR: 40.6
    epoch          : 27
    loss_0         : 0.7913076955540566
    val_loss_0     : 1.5775871678950295
    val_0_t2v_metrics_R1: 17.8
    val_0_t2v_metrics_R5: 40.6
    val_0_t2v_metrics_R10: 55.1
    val_0_t2v_metrics_R50: 81.5
    val_0_t2v_metrics_MedR: 8.0
    val_0_t2v_metrics_MeanR: 39.94
    val_0_t2v_metrics_geometric_mean_R1-R5-R10: 34.14804760940716
    val_0_v2t_metrics_R1: 16.1
    val_0_v2t_metrics_R5: 40.5
    val_0_v2t_metrics_R10: 55.0
    val_0_v2t_metrics_R50: 81.9
    val_0_v2t_metrics_MedR: 8.0
    val_0_v2t_metrics_MeanR: 40.5555
    val_0_v2t_metrics_geometric_mean_R1-R5-R10: 32.9772570568898
Validation performance didn't improve for 10 epochs. Training stops.

There are two R@1 results. Which one corresponds to the results in the paper? The R@1 in Table 5 is 31.0, which seems far from what this run achieves.
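(For reference, the val_0_t2v_* block is text-to-video retrieval and val_0_v2t_* is video-to-text. The logged geometric_mean_R1-R5-R10 is just the geometric mean of the three recall values; a minimal sketch of how it can be reproduced from the log above:)

```python
def geometric_mean_recall(r1, r5, r10):
    """Geometric mean of R@1, R@5 and R@10, as printed in the training log."""
    return (r1 * r5 * r10) ** (1.0 / 3.0)

# t2v values from the log block above.
print(geometric_mean_recall(17.8, 40.6, 55.1))  # ≈ 34.148
```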

m-bain commented 2 years ago

This is finetuning from the ImageNet initialisation. To finetune from the pretrained model, download it from https://www.robots.ox.ac.uk/~maxbain/frozen-in-time/models/cc-webvid2m-4f_stformer_b_16_224.pth.tar, then add "load_checkpoint": PATH_TO_DOWNLOADED_MODEL to configs/msrvtt_4f_i21k.json
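(A sketch of the config edit, assuming the checkpoint was saved to a local path; the path below is a placeholder, and the rest of configs/msrvtt_4f_i21k.json stays unchanged:)

```json
{
    "load_checkpoint": "/path/to/cc-webvid2m-4f_stformer_b_16_224.pth.tar"
}
```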

akira-l commented 2 years ago

Thanks! I will try finetuning from this pretrained checkpoint.

BTW, I have finetuned from the checkpoint WebVid2M+CC3M+COCO, 4-frames, base_patch_16_224, and I achieve:

    epoch          : 27
    loss_0         : 0.5593496548643809
    val_loss_0     : 1.0900163816081152
    val_0_t2v_metrics_R1: 28.4
    val_0_t2v_metrics_R5: 55.8
    val_0_t2v_metrics_R10: 67.5
    val_0_t2v_metrics_R50: 88.6
    val_0_t2v_metrics_MedR: 4.0
    val_0_t2v_metrics_MeanR: 29.498
    val_0_t2v_metrics_geometric_mean_R1-R5-R10: 47.46994959864713
    val_0_v2t_metrics_R1: 28.6
    val_0_v2t_metrics_R5: 56.2
    val_0_v2t_metrics_R10: 68.4
    val_0_v2t_metrics_R50: 88.9
    val_0_v2t_metrics_MedR: 4.0
    val_0_v2t_metrics_MeanR: 26.8585
    val_0_v2t_metrics_geometric_mean_R1-R5-R10: 47.90558524371724
Validation performance didn't improve for 10 epochs. Training stops.

Are these results correct? The R@1 is slightly lower than the reported value (31.0). Is this normal fluctuation?

akira-l commented 2 years ago

I got similar results to the paper with batch_size = 64.

m-bain commented 2 years ago

Run the test script: test.py --resume PATH_TO_FINETUNED_CHECKPOINT --sliding_window_stride 12

The sliding window stride argument adds temporal averaging over multiple frame samples :)
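(A minimal, hypothetical sketch of what sliding-window evaluation does, assuming the model maps a short clip of frames to an embedding: the video is embedded at several temporal offsets and the embeddings are averaged. The function names and the plain-list representation are illustrative, not the repo's actual API.)

```python
def sliding_window_starts(num_frames, window, stride):
    """Start index of each window slid over the frame sequence."""
    return list(range(0, max(num_frames - window, 0) + 1, stride))

def averaged_embedding(embed_fn, frames, window=4, stride=12):
    """Embed each window of `frames` with `embed_fn` and return the element-wise mean."""
    starts = sliding_window_starts(len(frames), window, stride)
    embs = [embed_fn(frames[s:s + window]) for s in starts]
    dim = len(embs[0])
    return [sum(e[i] for e in embs) / len(embs) for i in range(dim)]
```

With a 40-frame video, window=4 and stride=12 yield windows starting at frames 0, 12, 24 and 36, so the final embedding pools information from across the whole clip instead of a single 4-frame sample.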

akira-l commented 2 years ago

Exactly! I got a further improvement with this test script. R@1 comes to 33.7.