castacks / DytanVO

[ICRA'23] DytanVO: Visual Odometry in Dynamic Environments
BSD 3-Clause "New" or "Revised" License

output scale factor #6

Closed curie3170 closed 1 year ago

curie3170 commented 1 year ago

When testing the network, you scale the pose back (posenp = pose * self.pose_norm), but I can't figure out how to make this work during training. I tried training with the normalized loss, but the ATE diverges. I would really appreciate it if you could share the code snippet.

SecureSheII commented 1 year ago

Thank you for your interest in our work. You are right. Before training the posenet head, we scaled the ground-truth poses by a constant factor on each dimension, so we need to scale them back when testing the model. Basically, during training you want to scale the ground-truth pose by pose_norm, which is specified in https://github.com/castacks/DytanVO/blob/76ea83d6fea780bfc59f1a58416e5017d0762752/DytanVO.py#L66
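
A minimal sketch of that training-side normalization, assuming "scale by pose_norm" means dividing the ground-truth motion element-wise by the 6-element pose_norm vector from the linked line (the values below are illustrative), and using a plain L1 loss on the normalized motion for brevity; the actual DytanVO training loss may differ (e.g. an up-to-scale translation term):

```python
import torch
import torch.nn.functional as F

# Assumed to match self.pose_norm in DytanVO.py (values shown for illustration).
pose_norm = torch.tensor([0.13, 0.13, 0.13, 0.013, 0.013, 0.013])

def pose_loss(pred_motion, gt_motion):
    """pred_motion, gt_motion: (B, 6) tensors of [tx, ty, tz, rx, ry, rz]."""
    # Normalize the ground truth by pose_norm so it lives on the same scale
    # as the network output; this is the training-side counterpart of
    # posenp = pose * self.pose_norm at test time.
    gt_scaled = gt_motion / pose_norm.to(gt_motion.device)
    return F.l1_loss(pred_motion, gt_scaled)

# At test time, multiply the prediction by pose_norm to recover the
# original scale, as done in DytanVO.py.
```
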