TQTQliu / ET-MVSNet

[ICCV 2023] When Epipolar Constraint Meets Non-local Operators in Multi-View Stereo
MIT License

Quantify the results #16

Open 05063112lcs opened 1 month ago

05063112lcs commented 1 month ago

Hello, I'm sorry to bother you again; my understanding may still be limited. While reading your paper, I saw the quantitative results you report on the DTU and Tanks and Temples (TNT) datasets. For the TNT results, can I obtain them just by running the two test files you provided, or are additional steps required? Also, did you get the numbers in the paper from TensorBoard?

TQTQliu commented 1 month ago

First of all, the goal of MVS is to reconstruct point clouds, which is a two-step process: 1) generate depth maps with a neural network (i.e., the model); 2) filter and fuse the depth maps into a point cloud. The quantitative results presented in the paper (Tables 1 and 2) are all metrics evaluated on the reconstructed point clouds.
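To make step 2 concrete: depth-map fusion typically keeps a pixel only if its depth is geometrically consistent across views, i.e., the pixel round-trips through a neighboring view and lands back close to where it started, at a similar depth. Below is a minimal NumPy sketch of that consistency check; the camera convention, function names, and thresholds here are illustrative assumptions, not the exact ones used by ET-MVSNet's fusion code:

```python
import numpy as np

def project(K, R, t, xyz):
    """Project a 3D point (world frame) into a camera: returns (u, v, depth)."""
    cam = R @ xyz + t
    uvz = K @ cam
    return uvz[0] / uvz[2], uvz[1] / uvz[2], cam[2]

def backproject(K, R, t, u, v, depth):
    """Lift pixel (u, v) with a depth value back into the world frame."""
    cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    return R.T @ (cam - t)

def geo_consistent(u, v, d_ref, K_ref, P_ref, K_src, P_src, depth_src,
                   px_thresh=1.0, rel_depth_thresh=0.01):
    """Round-trip a reference pixel through a source view and test consistency."""
    R_ref, t_ref = P_ref
    R_src, t_src = P_src
    # reference pixel -> 3D -> source view
    xyz = backproject(K_ref, R_ref, t_ref, u, v, d_ref)
    u_s, v_s, _ = project(K_src, R_src, t_src, xyz)
    ui, vi = int(round(u_s)), int(round(v_s))
    h, w = depth_src.shape
    if not (0 <= ui < w and 0 <= vi < h):
        return False
    # source pixel (with the source view's own depth estimate) -> back to reference
    xyz_back = backproject(K_src, R_src, t_src, u_s, v_s, depth_src[vi, ui])
    u_r, v_r, d_r = project(K_ref, R_ref, t_ref, xyz_back)
    px_err = np.hypot(u_r - u, v_r - v)
    rel_err = abs(d_r - d_ref) / d_ref
    return px_err < px_thresh and rel_err < rel_depth_thresh
```

In practice a pixel is accepted only if it passes this check against several source views, and the accepted depths are fused into a single 3D point.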

Therefore, to get the quantitative results, you should: 1) run the test script to obtain the reconstructed point clouds, e.g., the test script for the DTU dataset; 2) run the code here to compute the error between the reconstructed point clouds and the ground truth. This is the MATLAB-based evaluation code that all MVS methods use for fair comparison. After it finishes, the final quantitative results are printed in the terminal and also saved to metric.txt (the evaluation results for each scene) and overall.txt (the average of the results over all scenes), which you can find in the corresponding output path. Note that this MATLAB code may take a long time to run (several hours), depending on your hardware. More details can be found in the readme.

Hope it helps. Just feel free to ask questions~
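For readers unsure how metric.txt and overall.txt relate: in the standard DTU protocol, each scene gets an accuracy and a completeness value (mean cloud-to-cloud distances in mm), these are averaged over scenes, and the "overall" score is the mean of the two averages. A small sketch with made-up scene values (the numbers below are illustrative only, not results from the paper):

```python
# Hypothetical per-scene DTU metrics: accuracy (reconstruction -> GT distance)
# and completeness (GT -> reconstruction distance), both in mm.
per_scene = {
    "scan1": {"acc": 0.34, "comp": 0.28},
    "scan4": {"acc": 0.30, "comp": 0.36},
    "scan9": {"acc": 0.38, "comp": 0.32},
}

# overall.txt-style summary: average each metric over scenes,
# then average the two metrics into a single "overall" score.
mean_acc = sum(m["acc"] for m in per_scene.values()) / len(per_scene)
mean_comp = sum(m["comp"] for m in per_scene.values()) / len(per_scene)
overall = (mean_acc + mean_comp) / 2
print(f"acc={mean_acc:.3f}  comp={mean_comp:.3f}  overall={overall:.3f}")
```

For both metrics, lower is better; the MATLAB code produces the per-scene values, and this averaging is what ends up in overall.txt.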

05063112lcs commented 4 weeks ago

Hello author, sorry to bother you so late! Today I tested my training results on the TNT dataset following the readme, but the output folder only contains the cams, masks, confidence maps, and other per-scene information; I did not find the txt files with the metrics you mentioned. Am I doing something wrong? (I used the DTU dataset as the training set, fine-tuned the trained model on the BlendedMVS dataset, and used the fine-tuned result as the checkpoint for testing on TNT.)


TQTQliu commented 4 weeks ago

DTU provides ground-truth point clouds, so the quantitative metrics can be obtained with the MATLAB evaluation code. TNT, however, is an online benchmark: its ground-truth point clouds are not public, so you need to upload your reconstructed point clouds (.ply) to the Tanks and Temples benchmark for evaluation. Refer here.
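Before uploading to the Tanks and Temples server, it can save a round trip to sanity-check that each scene's output is actually a PLY file. Below is a minimal, hypothetical checker (not part of the ET-MVSNet repo) that only inspects the file header; it does not validate the geometry or the benchmark's full submission requirements:

```python
def looks_like_ply(path):
    """Cheap sanity check that a file starts with a PLY header.

    Both ASCII and binary PLY files begin with an ASCII header that
    starts with the magic string "ply" and ends with "end_header".
    """
    with open(path, "rb") as f:
        head = f.read(4096)
    if not head.startswith(b"ply"):
        return False
    return b"end_header" in head
```

Running this over every scene's .ply before submission catches truncated or misnamed files early, since the online evaluation itself can take a while to return results.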