By the way, do the metrics reported in the paper have the same meaning as the MAE, 3-interval, and 0.6m metrics in RED-Net?
I ask because RED-Net computes them over pixels, while your paper seems to compute them over grids. Are they equivalent? Or are the metrics in this paper computed on the reconstructed results rather than on the depth maps?
Your statement is correct. The evaluation metrics used in Ada-MVS differ slightly from those in RED-Net: RED-Net evaluates the accuracy of depth maps, while Ada-MVS evaluates the accuracy of the reconstructed DSM rather than the depth map. For reference, the depth-map accuracy of Ada-MVS on the WHU-OMVS test set is:

| MAE (m) | PAG-D0.1m (%) | PAG-D0.3m (%) | PAG-D0.6m (%) |
|---------|---------------|---------------|---------------|
| 0.158   | 57.89         | 90.62         | 96.80         |
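For anyone comparing the two protocols, here is a minimal sketch of how MAE and PAG-Dx could be computed on a gridded DSM against a ground-truth DSM. The function name and the NaN-masking convention are my own assumptions for illustration, not the official Ada-MVS evaluation code:

```python
import numpy as np

def dsm_metrics(dsm, gt, thresholds=(0.1, 0.3, 0.6)):
    """MAE (m) and PAG-Dx (%) between a reconstructed DSM and ground truth.

    dsm, gt: 2-D arrays of elevations in meters; NaN marks invalid grids.
    For each threshold x, PAG-Dx is the percentage of valid grids whose
    absolute elevation error is below x meters.
    """
    valid = ~np.isnan(dsm) & ~np.isnan(gt)      # evaluate only grids valid in both
    err = np.abs(dsm[valid] - gt[valid])
    mae = float(err.mean())
    pag = {f"PAG-D{t}m": 100.0 * float((err < t).mean()) for t in thresholds}
    return mae, pag
```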
Thank you so much for your great work and your responses! I also want to ask another question about RED-Net, and I hope you can give me an answer. The paper says that with batchsize=1 the model was trained for 3 epochs on WHU-MVS, for a total of about 150k iterations. However, with batchsize=1, each epoch is 4320 iterations, so 3 epochs total only about 13k (3 × 4320 = 12960). I don't quite understand this. In addition, I reproduced the RED-Net PyTorch code you posted, but the results were not as good as those in the paper. I used the default parameters, and I noticed that interval_scale=2.0 there, while the TensorFlow version uses 1.0. Can you tell me the reason? My understanding of how interval_scale enters the depth sampling is sketched below.
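A sketch of the common MVSNet-style convention for interval_scale; the names here are illustrative and may not match the repo's actual code:

```python
import numpy as np

def depth_hypotheses(depth_min, depth_interval, num_depths, interval_scale=1.0):
    # interval_scale stretches the spacing between sampled depth planes,
    # and therefore also the total depth range swept by the cost volume.
    return depth_min + np.arange(num_depths) * depth_interval * interval_scale

# With interval_scale=2.0, the same num_depths covers twice the depth range
# at half the resolution compared to interval_scale=1.0, which can change
# the accuracy metrics noticeably.
```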
I tested on WHU-5 with batchsize=1: each training epoch is 21600/5 = 4320 iterations, and each test epoch is 6800/5 = 1360 iterations.
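The same arithmetic, spelled out; grouping the 21600 training images into 5-view samples is my reading of the numbers above:

```python
samples_per_epoch = 21600 // 5            # 4320 five-view training samples
epochs = 3
batch_size = 1
total_iterations = epochs * samples_per_epoch // batch_size
print(total_iterations)                   # 12960 -- about 13k, not 150k
```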
Thank you for your great work! Are there any test metrics for Ada-MVS, such as MAE, 3-interval, and 0.6m? If so, could you share your official test results?