NeurAI-Lab / biHomE

This is the official repo for the CVPR 2021 IMW paper: "Perceptual Loss for Robust Unsupervised Homography Estimation"

MACE values #3

Closed: Zekhire closed this issue 2 years ago

Zekhire commented 2 years ago

Hello.

  1. What are the expected MACE values for models trained with the following config files?
    • config/pds-coco/zeng-bihome-lr-1e-3.yaml
    • config/pds-coco/zhang-bihome-lr-1e-2.yaml
    • config/s-coco/nguyen-orig-lr-5e-3.yaml

I am doing everything exactly as written in the README.md, and after training the models I run the following commands:

python3 eval.py --config_file config/pds-coco/zeng-bihome-lr-1e-3.yaml --ckpt log/zeng-bihome-pdscoco-lr-1e-3/model_090000.pth
python3 eval.py --config_file config/pds-coco/zhang-bihome-lr-1e-2.yaml --ckpt log/zhang-bihome-pdscoco-lr-1e-2/model_090000.pth
python3 eval.py --config_file config/s-coco/nguyen-orig-lr-1e-3.yaml --ckpt log/nguyen-orig-scoco-lr-1e-3/model_090000.pth
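(For reference: MACE is the Mean Average Corner Error, i.e. the mean L2 distance between the four patch corners as warped by the estimated homography versus the ground-truth one. A minimal numpy sketch, with illustrative names that are not the repo's actual API:)

import numpy as np

def mace(pred_corners, gt_corners):
    # pred_corners, gt_corners: (N, 4, 2) arrays of corner positions
    # produced by the estimated and ground-truth homographies.
    # L2 distance per corner, then mean over corners and samples.
    return np.linalg.norm(pred_corners - gt_corners, axis=-1).mean()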

I get the following MACE values:

Based on what you have written in the paper, the MACE should be the smallest for the Zeng model, but that is not what I observe.

  2. Also, are the MACE values for Nguyen and Zhang in Figure 1 of your paper obtained from models trained with the following config files?
    • config/s-coco/nguyen-orig-lr-5e-3.yaml
    • config/s-coco/zhang-bihome-lr-1e-2.yaml
dkoguciuk commented 2 years ago

Hi @Zekhire , thank you for your interest in our paper.

Generally, training here is unstable. I'm not sure whether this is due to my implementation or a general property of learning homography estimation. I encountered a ton of convergence problems while trying to reproduce the numbers from other papers, and I ended up training several models, discarding outliers, and calculating the mean and std.
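A minimal sketch of that protocol (the numbers and the rejection rule are illustrative, not the exact ones I used):

import numpy as np

runs = np.array([2.05, 2.18, 2.09, 7.93])   # MACE of repeated trainings
median = np.median(runs)
kept = runs[np.abs(runs - median) < 2.0]    # drop clearly diverged runs
print(kept.mean(), kept.std())              # report mean and std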

Regarding your numbers:

  1. Zeng+biHomE has simply not converged. This is probably one of the outliers that happen sometimes. Are you able to start the training one more time? Can you also post your learning curves from TensorBoard? Normally, the loss during the first couple of epochs tells you whether a run is going to converge or not (see the sketch after this list). For converged models you should get, on average, something around 2.11 MACE on PDS-COCO.
  2. Your Zhang+biHomE model's performance is really strange; normally the Zeng backbone was much better for me. I suspect that is because of its much stronger supervision and/or the stronger regularization coming from the random-sampling part. Nevertheless, as stated in the paper, Zeng models normally got around 2.11 and Zhang models around 2.51 on PDS-COCO.
  3. An original Nguyen model achieving 1.57 on S-COCO is also really good; I have never gotten such a good model on S-COCO. It's even better than the best supervised model. Honestly, I cannot explain this. Can you try training it one more time?
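To check convergence early, you can read the scalar curve straight out of the TensorBoard event files; a minimal sketch, where the log directory and the scalar tag name are assumptions you should adapt:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("log/zeng-bihome-pdscoco-lr-1e-3")
acc.Reload()                                  # parse the event files
print(acc.Tags()["scalars"])                  # list the available scalar tags
events = acc.Scalars("loss")                  # assumed tag name
steps = [e.step for e in events]
values = [e.value for e in events]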

For S-COCO, to train Nguyen you need to use config/s-coco/nguyen-orig-lr-5e-3.yaml, and for Zhang you need config/s-coco/zhang-bihome-lr-1e-2.yaml.
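(The corresponding training invocations should mirror the eval ones above, assuming train.py takes the same --config_file flag as eval.py; check the README for the exact command:)

python3 train.py --config_file config/s-coco/nguyen-orig-lr-5e-3.yaml
python3 train.py --config_file config/s-coco/zhang-bihome-lr-1e-2.yaml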

Please let me know if you have any more questions, Daniel

Zekhire commented 2 years ago
  1. I have trained config/s-coco/nguyen-orig-lr-5e-3.yaml one more time and got MACE: 1.62377309799194. Here is the loss curve: [W&B loss chart screenshot, 12/6/2021]

I am going to share loss curves for other models as soon as possible.

dkoguciuk commented 2 years ago

Yeah, the loss curve looks exactly as expected. Which PyTorch version do you use?

dkoguciuk commented 2 years ago

Hi @Zekhire ,

I'm sorry for the late reply: it turned out that I had forgotten to upload the PhotometricHead, and the Nguyen config was incorrect. With the updated code I got a MACE of 2.17 for the Nguyen config on the S-COCO dataset, which seems like a correct value (2.08 is reported in the paper). Please check whether it works better for you now.

Best, D

P.S. My env is: CUDA 11.1 + PyTorch 1.9.

Zekhire commented 2 years ago

Thanks for your help. I will check everything as soon as possible.

dkoguciuk commented 2 years ago

Hi @Zekhire ,

do you have any updates? :slightly_smiling_face:

Best, D

dkoguciuk commented 2 years ago

Hi @Zekhire ,

For now, I'm closing the issue, feel free to reopen it anytime :slightly_smiling_face:

Best, D