ActiveVisionLab / DFNet

DFNet: Enhance Absolute Pose Regression with Direct Feature Matching (ECCV 2022)
https://dfnet.active.vision
MIT License

Question about pre-trained model #13

Closed: shenyehui closed this issue 11 months ago

shenyehui commented 11 months ago

Dear authors, I have another small question. When I tested your pre-trained model on the KingsCollege dataset, the results were very poor, with an error of about 30 meters. Is it normal to have such large errors when using a different environment? I am currently retraining, and fortunately the results are gradually approaching those in the paper.

chenusc11 commented 11 months ago

I don't think it is supposed to have such big errors. Could you please check that you have put the configuration files in the data folder? https://github.com/ActiveVisionLab/DFNet/tree/1389760f770851a77e601af1312f19fe065bd185/data/Cambridge/KingsCollege

Notice that our pre-trained model is trained in OpenGL conventions (same as NeRF). The conversion code is here: https://github.com/ActiveVisionLab/DFNet/blob/1389760f770851a77e601af1312f19fe065bd185/dataset_loaders/load_Cambridge.py#L277
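For readers unfamiliar with the convention issue: NeRF-style loaders commonly convert OpenCV-style camera-to-world poses (x right, y down, z forward) to OpenGL convention (x right, y up, z backward) by negating the camera's y and z axes. A minimal sketch of that flip (the function name is illustrative; see the linked line for DFNet's actual conversion code):

```python
import numpy as np

def cv_to_gl(c2w):
    """Convert a 4x4 camera-to-world pose from OpenCV convention
    (x right, y down, z forward) to OpenGL/NeRF convention
    (x right, y up, z backward) by flipping the y and z camera axes."""
    c2w = np.array(c2w, dtype=np.float64, copy=True)
    c2w[:3, 1:3] *= -1  # negate the 2nd and 3rd columns (camera y and z axes)
    return c2w
```

Feeding a model poses in the wrong convention typically produces exactly the kind of tens-of-meters errors reported above.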

shenyehui commented 11 months ago

> Notice that our pre-trained model is trained in OpenGL conventions (same as NeRF). The conversion code is here:

Thank you for your reminder. I retested using your pre-trained model and obtained the following results:

Median error: 0.7324 meters and 2.3696 degrees. Mean error: 1.0838 meters and 2.5473 degrees.

Are these results acceptable? I am in the process of retraining the model, hoping to achieve better results.
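For context, median/mean translation and rotation errors like the ones above are typically computed per test image and then aggregated. A minimal sketch of the per-image metric (a hypothetical helper, not DFNet's evaluation code), with rotations as unit quaternions:

```python
import numpy as np

def pose_errors(t_pred, q_pred, t_gt, q_gt):
    """Return (translation error in meters, rotation error in degrees)."""
    # translation error: Euclidean distance between camera centers
    t_err = np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt))
    # rotation error: angle between the two orientations; |dot| handles
    # the q / -q double cover of unit quaternions
    q_pred = np.asarray(q_pred) / np.linalg.norm(q_pred)
    q_gt = np.asarray(q_gt) / np.linalg.norm(q_gt)
    d = min(1.0, abs(float(np.dot(q_pred, q_gt))))
    r_err = 2.0 * np.degrees(np.arccos(d))
    return t_err, r_err
```

The median is the number usually quoted for Cambridge scenes because it is robust to a few badly localized test frames, which is why the mean above is noticeably larger.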

shenyehui commented 11 months ago

I apologize for another question. I retrained the DFNet model and obtained the following results: the median error is 1.4927 meters and 7.7085 degrees, and the mean error is 2.1420 meters and 8.9707 degrees. There is still some deviation from the results in the paper. It is worth mentioning that training stopped at the 127th epoch, with the best results at the 107th epoch. Is there a problem with my training process, or is it normal to encounter such errors on a new device?

chenusc11 commented 11 months ago

> Notice that our pre-trained model is trained in OpenGL conventions (same as NeRF). The conversion code is here:
>
> Thank you for your reminder. I retested using your pre-trained model and obtained the following results:
>
> Median error: 0.7324 meters and 2.3696 degrees. Mean error: 1.0838 meters and 2.5473 degrees.
>
> Are these results acceptable? I am in the process of retraining the model, hoping to achieve better results.

Hi, those are the correct results; I believe they match what we reported in the paper.

chenusc11 commented 11 months ago

> I apologize for another question. I retrained the DFNet model and obtained the following results: the median error is 1.4927 meters and 7.7085 degrees, and the mean error is 2.1420 meters and 8.9707 degrees. There is still some deviation from the results in the paper. It is worth mentioning that training stopped at the 127th epoch, with the best results at the 107th epoch. Is there a problem with my training process, or is it normal to encounter such errors on a new device?

Not sure what happened here. There may be some randomness across devices and seeds when using early stopping. Please try a longer `--patience` to see if it leads to better results; stopping 20 epochs after the best might be too soon.
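The patience mechanism being discussed can be sketched as a generic early-stopping helper (this is an illustration, not DFNet's actual trainer; only the `--patience` flag name comes from the comment above):

```python
class EarlyStopping:
    """Stop training once the validation error has not improved
    for `patience` consecutive epochs."""

    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("inf")
        self.best_epoch = 0

    def step(self, epoch, val_error):
        """Record this epoch's validation error; return True to stop."""
        if val_error < self.best:
            self.best = val_error
            self.best_epoch = epoch
        return (epoch - self.best_epoch) >= self.patience
```

With a small patience, a noisy validation curve can trigger a stop shortly after a local best (here, 20 epochs after epoch 107), even though a later epoch might have improved further; a larger patience trades extra training time for a better chance of escaping such plateaus.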

shenyehui commented 11 months ago

> I apologize for another question. I retrained the DFNet model and obtained the following results: the median error is 1.4927 meters and 7.7085 degrees, and the mean error is 2.1420 meters and 8.9707 degrees. There is still some deviation from the results in the paper. It is worth mentioning that training stopped at the 127th epoch, with the best results at the 107th epoch. Is there a problem with my training process, or is it normal to encounter such errors on a new device?
>
> Not sure what happened here. There may be some randomness across devices and seeds when using early stopping. Please try a longer `--patience` to see if it leads to better results; stopping 20 epochs after the best might be too soon.

Thank you! I will try it later.