Closed AlanIIE closed 3 years ago
That's weird. What's the versions of your dependencies (Pytorch etc.)?
Thanks for your quick reply. I trained with these dependencies: python==3.7, CUDA==10.0, torch==1.2.0, torchvision==0.4.0, spatial-correlation-sampler==0.2.0
I have the same question. Maybe we should change the dataloader code to load more images as references?
You may need the same versions as specified in the README. There are some compatibility issues; e.g., I found PyTorch 1.4 doesn't work.
@zlai0 Thank you for sharing your excellent work! But I have the same problem as @AlanIIE. With the given code and default settings, I only got Js = 0.392. Do I need to add multi-frame training?
PyTorch 1.2 also does not work. I use PyTorch 1.1.0 and spatial-correlation-sampler 0.0.8, and successfully reproduce the results in the paper. That's so weird!
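For anyone else hitting this, the environment that worked for me can be pinned like this. The torch and spatial-correlation-sampler versions are the ones reported above; the torchvision pin is my assumption of the release that pairs with torch 1.1.0, so adjust it per the README if needed:

```shell
# Pin the versions reported to reproduce the paper's numbers.
# torchvision==0.3.0 is assumed to match torch==1.1.0.
pip install torch==1.1.0 torchvision==0.3.0
pip install spatial-correlation-sampler==0.0.8
```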
Excellent work!
I confirm that the test results with the provided pre-trained model match the released ones. However, with the given code, I cannot train a model with similar performance on the YouTube-VOS dataset on my own. My model only got Js = 0.405 and Fs = 0.481 after 30 epochs of training. What's the problem?
When I check the training code in main.py (Ln 184), I find the model is only trained on pairwise data. Could you please release the code for the long- and short-term memory described in the paper? Is that the reason I cannot achieve a higher score?
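To make the pairwise-vs-multi-frame point concrete, here is a minimal sketch of how the dataloader's index sampling could be extended to draw several reference frames per target instead of a single pair. This is purely illustrative; the function name and parameters (`n_refs`, `max_gap`) are hypothetical and not part of the released code:

```python
import random

def sample_frame_indices(n_frames, n_refs=3, max_gap=5, seed=None):
    """Pick one target frame and n_refs earlier reference frames.

    The released code effectively uses n_refs=1 (pairwise training);
    a memory-style setup would sample several references per target.
    """
    rng = random.Random(seed)
    # Leave room before the target so enough reference frames exist.
    target = rng.randrange(n_refs * max_gap, n_frames)
    # Draw distinct reference indices from a window preceding the target.
    window = range(max(0, target - n_refs * max_gap), target)
    refs = sorted(rng.sample(window, n_refs))
    return refs, target
```

A dataset's `__getitem__` could then load and stack `refs + [target]` frames, with the loss computed against the multi-frame memory rather than a single reference.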