NVlabs / PWC-Net

PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume, CVPR 2018 (Oral)

Could you please provide the pretrained model with the AEPE of 2.00 on FlyingChairs? #109

Closed: littlespray closed this issue 4 years ago

littlespray commented 4 years ago

Hi,

Thank you for your great work! I am re-implementing PWC-Net in PyTorch, and your repository has helped me a lot!

But I have run into a problem: I trained my model for 400 epochs, but it only reaches an AEPE of 3.9 on the training set, far from the 2.0 reported in the original paper. So I tried the pre-trained weights you provided, pwc_net_chairs.pth.tar, and got an AEPE of 3.56.

I know this pre-trained model was fine-tuned on FlyingThings3D, and I think that may be why it does not match the numbers in the original paper. So could you offer a pre-trained model that reaches an AEPE of 2.0 on FlyingChairs? Thank you very much!
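For reference, AEPE (average endpoint error) is the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal PyTorch sketch, where the function name and the BCHW tensor layout are assumptions rather than anything from this repo:

```python
import torch

def average_epe(flow_pred: torch.Tensor, flow_gt: torch.Tensor) -> torch.Tensor:
    """Average endpoint error between predicted and ground-truth flow.

    Both tensors are assumed to have shape (B, 2, H, W) and to be in pixel
    units, i.e. the network output multiplied back by 20 if the model
    predicts flow divided by 20.
    """
    epe = torch.norm(flow_pred - flow_gt, p=2, dim=1)  # per-pixel endpoint error, (B, H, W)
    return epe.mean()
```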

littlespray commented 4 years ago

I also want to make sure: is the only preprocessing to randomly crop the images and ground-truth flows to [384, 448] and then divide the ground-truth flows by 20? And there is no need to crop the images when calling infer(), right?
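For context, here is a minimal sketch of that preprocessing (the same random crop applied to both images and the flow, then the flow divided by 20), assuming NumPy arrays in HWC layout; the function and constant names are illustrative, not from this repo:

```python
import numpy as np

CROP_H, CROP_W = 384, 448  # training crop size discussed above
FLOW_SCALE = 20.0          # ground-truth flow is divided by 20 for training

def random_crop_and_scale(img1, img2, flow):
    """Apply one shared random crop to an image pair and its flow, then scale the flow.

    img1, img2: (H, W, 3) arrays; flow: (H, W, 2) array in pixel units.
    Using the same crop window keeps the images and the flow aligned.
    """
    h, w = img1.shape[:2]
    y = np.random.randint(0, h - CROP_H + 1)
    x = np.random.randint(0, w - CROP_W + 1)
    img1 = img1[y:y + CROP_H, x:x + CROP_W]
    img2 = img2[y:y + CROP_H, x:x + CROP_W]
    flow = flow[y:y + CROP_H, x:x + CROP_W] / FLOW_SCALE
    return img1, img2, flow
```

At inference time the full images can be fed in without cropping; they are typically padded or resized so that height and width are multiples of 64 for the feature pyramid, but that is separate from the random crop used in training.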

littlespray commented 4 years ago

OK, I have solved this problem. There is nothing wrong with the model they provided: the model fine-tuned on FlyingThings3D can achieve an AEPE of 2.3. The reason I could not reach the same AEPE is that I use PyTorch 1.4.0 and CUDA 10.2; there are implementation differences between my setup and PyTorch 1.0.0 with CUDA 9.0.

JerryLeolfl commented 4 years ago

> OK, I have solved this problem. There is nothing wrong with the model they provided: the model fine-tuned on FlyingThings3D can achieve an AEPE of 2.3. The reason I could not reach the same AEPE is that I use PyTorch 1.4.0 and CUDA 10.2; there are implementation differences between my setup and PyTorch 1.0.0 with CUDA 9.0.

Hi Bro, I pre-trained the model on FlyingChairs and it converged to an EPE of 2.3. However, when I fine-tune it on FlyingThings3D with 10% of the pre-training initial learning rate and a patch size of 384x768, it does not converge and stays at an AEPE of 39.8. Have you met this problem? And what are your parameter settings for fine-tuning?

littlespray commented 3 years ago

> Hi Bro, I pre-trained the model on FlyingChairs and it converged to an EPE of 2.3. However, when I fine-tune it on FlyingThings3D with 10% of the pre-training initial learning rate and a patch size of 384x768, it does not converge and stays at an AEPE of 39.8. Have you met this problem? And what are your parameter settings for fine-tuning?

Hi Bro,

I didn't run into that problem. You can refer to the irr repository; I use the same settings as theirs. Hope it helps.
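For what it's worth, here is a hedged sketch of the kind of fine-tuning setup being discussed (Adam with 10% of the pre-training learning rate and step decay); the concrete numbers are assumptions in the spirit of an Sfine-style schedule, not the official PWC-Net or irr settings:

```python
import torch
import torch.nn as nn

# Illustrative values only; these are assumptions, not the official PWC-Net or irr settings.
PRETRAIN_LR = 1e-4               # assumed initial learning rate for FlyingChairs pre-training
FINETUNE_LR = 0.1 * PRETRAIN_LR  # 10% of the pre-training learning rate, as discussed above

def build_finetune_optimizer(model: nn.Module):
    """Adam plus a step-decay schedule for FlyingThings3D fine-tuning (values are assumptions)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=FINETUNE_LR, weight_decay=4e-4)
    # Halve the learning rate at a few assumed iteration milestones.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[200_000, 300_000, 400_000], gamma=0.5)
    return optimizer, scheduler
```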