Jasmine-tjy opened 2 months ago
Some of the parameters and method calls in our provided code are tailored for ETH3D. If you want to reproduce our results on the Tanks & Temples dataset, appropriate modifications are required. The main changes are as follows:
In main.cpp, modify the ComputeRoundNum function, changing max_size > 1000 to max_size > 800 so that the algorithm runs for three rounds.
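The thread does not show the body of ComputeRoundNum, so the sketch below is an assumption about its shape: a function that halves the image size per round while it exceeds a threshold. It only illustrates why lowering the threshold from 1000 to 800 can yield one extra round (T&T images are 1920 px wide, so 1920 → 960 stops at the old threshold but continues at the new one):

```cpp
#include <cassert>

// Hypothetical sketch of ComputeRoundNum -- the real implementation in
// main.cpp is not shown in this thread. Assumed behavior: downsample by 2
// each round while the largest image dimension exceeds the threshold.
int ComputeRoundNum(int max_size, int threshold) {
    int rounds = 1;
    while (max_size > threshold) {
        max_size /= 2;  // one extra coarse-to-fine round per halving
        ++rounds;
    }
    return rounds;
}
```

Under this assumption, a 1920-px image gives 2 rounds with the original `> 1000` check and 3 rounds with `> 800`, matching the three-round behavior described above.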
For the Intermediate dataset in T&T, use the RunFusion_TAT_Intermediate method from the code, and for the Advanced dataset in T&T, use the RunFusion_TAT_Advanced method (both methods are in the APD.cpp file). Since these two sub-datasets differ significantly from the ETH3D dataset, we made some appropriate adjustments based on the fusion methods used in deep learning approaches.
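A minimal dispatch sketch for the split selection described above. Only the method names RunFusion_TAT_Intermediate and RunFusion_TAT_Advanced come from the thread; the enum and parser below are assumptions added for illustration, not project code:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical wrapper: choose which fusion routine from APD.cpp to call
// based on the T&T split name. The enum and ParseSplit are assumptions.
enum class TatSplit { Intermediate, Advanced };

TatSplit ParseSplit(const std::string& name) {
    if (name == "intermediate") return TatSplit::Intermediate;  // -> RunFusion_TAT_Intermediate
    if (name == "advanced")     return TatSplit::Advanced;      // -> RunFusion_TAT_Advanced
    throw std::invalid_argument("unknown T&T split: " + name);
}
```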
These should be the main differences. Also, when running T&T, use the short-camera files and set the depth interval to 256.
Emmm, it's actually quite simple. The T&T scenes are very large, while the point cloud that actually counts toward scoring occupies only a small portion of them. Some T&T SfM results therefore crop the camera's depth range, changing depth min and especially depth max so that the depth range covers only a small part of the scene. This both speeds up the computation and filters out stray points that do not participate in scoring. That is why some of the datasets you download may contain folders named something like "Short Camera". It's a small trick for boosting scores, hahahaha. That's roughly the explanation; I haven't really evaluated how much it affects the score, so feel free to test it yourself if you're interested~
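The filtering effect described above can be sketched as follows. This is an illustration of the idea only (the struct and function names are assumptions, not project code): points whose depth falls outside the cropped [depth_min, depth_max] range are simply dropped:

```cpp
#include <vector>

// Hypothetical illustration of the "short camera" trick: narrowing the
// depth range rejects stray points outside the scored region. All names
// here are assumptions added for this sketch.
struct Point {
    float x, y, z;
    float depth;  // depth of the point in the reference camera
};

std::vector<Point> FilterByDepthRange(const std::vector<Point>& pts,
                                      float depth_min, float depth_max) {
    std::vector<Point> kept;
    for (const auto& p : pts)
        if (p.depth >= depth_min && p.depth <= depth_max)
            kept.push_back(p);  // keep only points inside the cropped range
    return kept;
}
```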
Thanks for your excellent work. I tested the project on both the ETH3D dataset and the Tanks & Temples dataset. I get similar results on ETH3D, but I have tested Tanks & Temples twice and the results are worse than the paper reports. I used the dataset from the link the APD-MVS project provided. I wonder whether there is anything wrong with these results? The results I get: F-score: Auditorium: 21.45, Ballroom: 35.46, Courtroom: 35.85, Family: 72.37, Francis: 58.69, Horse: 46.06, Lighthouse: 64.26, M60: 57.25, Museum: 48.11, Palace: 24.85, Panther: 57.82, Playground: 59.52, Temple: 36.19, Train: 49.12. Mean intermediate: 58.14, Mean advanced: 33.65.
The results in the paper: Mean intermediate: 63.64, Mean advanced: 39.91.