hkchengrex / Mask-Propagation

[CVPR 2021] MiVOS - Mask Propagation module. Reproduced STM (and better) with training code :star2:. Semi-supervised video object segmentation evaluation.
https://hkchengrex.github.io/MiVOS/
MIT License

J&F performance on BL30K #32

Closed. vateye closed this issue 3 years ago.

vateye commented 3 years ago

Hi, I am training with BL30K for DAVIS 2017 val (stage 0 and stage 1). I just want to know what J&F I should expect on DAVIS 2017 val after finishing BL30K training, so I can check whether my training is correct. I don't think it is included in the readme.

hkchengrex commented 3 years ago

It should be quite low (<70). After all, there is a large domain gap between BL30K and real images. It should quickly recover and overtake the standard static image pretrained model in stage 2 training.

So, no -- you cannot really debug it just after stage 1 training. For debugging, I recommend running stage 0 + stage 2 only, and training the final model (0+1+2) only once.
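A minimal sketch of what chaining the stages looks like, assuming a generic PyTorch setup; the helper name and checkpoint paths below are placeholders, not the actual Mask-Propagation training API:

```python
# Hypothetical warm-start helper; names and paths are placeholders, not the
# repo's actual training entry points.
import torch

def warm_start(model, prev_stage_ckpt):
    # Initialize the next stage from the weights saved at the end of the
    # previous stage; strict=False tolerates buffers/heads that differ
    # between stages.
    state_dict = torch.load(prev_stage_ckpt, map_location='cpu')
    model.load_state_dict(state_dict, strict=False)
    return model

# Debug cycle:  stage 0 (static images) -> stage 2 (DAVIS/YouTube-VOS)
# Final model:  stage 0 -> stage 1 (BL30K) -> stage 2, run once at the end
```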

vateye commented 3 years ago

Thanks, I just want to make sure that my training on BL30K is correct, since I only got ~57 J&F on DAVIS 2017 val after 400k iterations. So, should I choose the weights with the highest J&F score during BL30K training as the initialization for stage 2?

hkchengrex commented 3 years ago

I did not perform any model selection (I just picked the last checkpoint). If you did not modify the code/hyperparameters, the performance looks reasonable to me because the learning rate has not started annealing yet.
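For context, "annealing" here means the step decay of the learning rate near the end of training; a rough sketch below, with the milestone and base rate chosen as assumptions rather than the repo's actual hyperparameters:

```python
# Illustrative step-decay ("annealing") schedule; the milestone and learning
# rate are assumptions, not the actual training configuration.
import torch

params = [torch.nn.Parameter(torch.zeros(1))]   # stand-in for model.parameters()
optimizer = torch.optim.Adam(params, lr=2e-5)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[400_000], gamma=0.1)  # LR cut after 400k iterations (assumed)

# J&F typically improves noticeably only after the milestone is passed.
```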

You can take the current model directly to stage 2 (the final performance might be a bit worse; I haven't actually tried that), or finish the final 100K iterations of training and move the final model to stage 2 to get the listed performance.
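A small sketch of "just pick the last one", assuming checkpoints are written to a directory as `*.pth` files (a hypothetical layout, not the repo's exact output format):

```python
# Pick the most recently written checkpoint rather than the best-J&F one.
# The directory layout and *.pth pattern are assumptions.
from pathlib import Path

def last_checkpoint(ckpt_dir):
    ckpts = sorted(Path(ckpt_dir).glob('*.pth'), key=lambda p: p.stat().st_mtime)
    if not ckpts:
        raise FileNotFoundError(f'no checkpoints found in {ckpt_dir}')
    return ckpts[-1]   # latest iteration; no validation-based model selection
```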

vateye commented 3 years ago

Okay, thanks.