Beniko95J / MLF-VO

Multi-Layer Fusion Visual Odometry
GNU General Public License v3.0

About the result of the training #6

Open ZhengDonglei opened 1 year ago

ZhengDonglei commented 1 year ago

Dear author: It seems that the final training result cannot reach the result you mentioned in the paper or on the master branch. I trained the model in the same manner using an NVIDIA RTX 3090, and all other options are the same. My question is: can you provide more detail about how to train the model?

Beniko95J commented 1 year ago

Hi, thank you for your interest.

The settings in the dev branch should be right, and I think the only thing that needs to be changed is the selected pose network. May I ask whether you have checked the curves of test/t_rel and test/r_rel in the TensorBoard logs? The best model may not be the last one; in our experiments we choose the best model from the last 5 epochs according to the curves of test/t_rel and test/r_rel.
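As an illustration of that selection step, here is a minimal sketch (not part of this repository) that reads the test/t_rel and test/r_rel scalars from the TensorBoard event files and picks the best of the last 5 epochs. The log directory, the equal weighting of the two metrics, and the assumption of one scalar point per epoch are all illustrative assumptions.

```python
# Hypothetical sketch: rank the last 5 epochs by the logged test/t_rel and
# test/r_rel scalars and report the best one. The tag names follow the ones
# mentioned above; everything else (paths, weighting) is an assumption.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def best_of_last_epochs(log_dir, last_k=5):
    acc = EventAccumulator(log_dir)
    acc.Reload()
    t_rel = [e.value for e in acc.Scalars("test/t_rel")]
    r_rel = [e.value for e in acc.Scalars("test/r_rel")]
    n = len(t_rel)
    # Weighting t_rel and r_rel equally is an arbitrary choice here;
    # one could also rank by r_rel alone, as suggested later in this thread.
    candidates = range(max(0, n - last_k), n)
    return min(candidates, key=lambda i: t_rel[i] + r_rel[i])

if __name__ == "__main__":
    print(best_of_last_epochs("logs/mlf_vo"))  # hypothetical log directory
```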

SBaokun commented 1 year ago

[screenshots of training results attached] Thanks a lot for your work! I trained the model using an RTX 4090, but I find that the final training result cannot reach the result in your paper, especially the ATE on sequence 09. Can you give me some suggestions? Thanks!

Beniko95J commented 1 year ago

It seems that the training diverges during the last epochs; the performance on Seq. 09 is getting worse than in the epochs before 35. You can try to decrease the learning rate again in the last 10 epochs (I remember that the default setting is 1e-4 for epochs 1--20 and 5e-5 for epochs 21--40) and search for the best model in a larger range. Since the ATE largely depends on the rotation estimation, you can search for the best model based on the average rotational error. However, the ATE performance may still be very sensitive, which is similarly reported here and here.
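For reference, a minimal PyTorch sketch (not the repository's actual training script) of the schedule described above: 1e-4 for epochs 1--20, 5e-5 afterwards, with an extra drop for the last 10 epochs. The second milestone at epoch 30, the 0.5 decay factor, and the placeholder network are assumptions.

```python
# Hypothetical LR schedule: 1e-4 for epochs 1-20, 5e-5 for epochs 21-30,
# and a further halving (2.5e-5) for the last 10 epochs.
# The second milestone and decay factor are assumptions, not repo defaults.
import torch

pose_net = torch.nn.Linear(10, 6)  # placeholder for the pose network
optimizer = torch.optim.Adam(pose_net.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 30], gamma=0.5)  # 1e-4 -> 5e-5 -> 2.5e-5

for epoch in range(40):
    # ... run one training epoch, then evaluate test/t_rel and test/r_rel ...
    optimizer.step()   # stand-in for the per-batch update loop
    scheduler.step()
```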

SBaokun commented 1 year ago

Thank you for the quick reply. I have adjusted the learning rate and the results have improved.