Open Yajiang opened 11 months ago

Thanks for your great work. We tested it on our own data, using a lidar point cloud as ground truth, and found that the distributed reconstruction had a lower F-score than the incremental reconstruction. Could you please give any advice?
I think you aligned the sparse point clouds with the lidar scan to compute the F1 score, so the result may relate to the point density of the SfM output. Can you provide more details on the metrics for the incremental reconstruction and DAGSfM, e.g. the reprojection errors and the numbers of recovered camera poses and 3D points? You may also want to show the reconstruction results in the modified GUI (the clustered camera poses and sparse point clouds) to inspect visual artifacts.
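For reference, these statistics can be pulled from a COLMAP-format sparse model roughly like this (a minimal sketch assuming the pycolmap bindings; the paths are placeholders, and the method names should be checked against your pycolmap version):

```python
# Minimal sketch: compare basic SfM statistics of two COLMAP-format models,
# e.g. the incremental result and the DAGSfM result. Paths are placeholders.
import pycolmap

for path in ("/path/to/incremental/sparse/0", "/path/to/dagsfm/sparse/0"):
    rec = pycolmap.Reconstruction(path)
    print(path)
    print("  registered images :", rec.num_reg_images())
    print("  3D points         :", rec.num_points3D())
    print("  mean reproj error :", rec.compute_mean_reprojection_error())
    print("  mean track length :", rec.compute_mean_track_length())
```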
Actually, we aligned the reconstructed dense point clouds with the lidar point cloud to compute the F1 score, so the SfM results are mixed with the dense reconstruction and are a bit difficult to compare in isolation. Here are our dense point cloud results. As you can see, they share the same world coordinates, but in some places the distributed reconstruction drifts a bit.
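For concreteness, the F1 computation is essentially precision/recall over nearest-neighbor distances between the two clouds (a sketch assuming open3d; the threshold `tau` is a placeholder for the evaluation distance):

```python
# F-score sketch between a reconstructed dense cloud and the lidar ground
# truth, assuming both are already expressed in the same world frame.
import numpy as np
import open3d as o3d

def f_score(recon_path, lidar_path, tau=0.05):
    recon = o3d.io.read_point_cloud(recon_path)
    lidar = o3d.io.read_point_cloud(lidar_path)

    # Nearest-neighbor distances in both directions.
    d_recon = np.asarray(recon.compute_point_cloud_distance(lidar))
    d_lidar = np.asarray(lidar.compute_point_cloud_distance(recon))

    precision = float(np.mean(d_recon < tau))  # accuracy of reconstruction
    recall = float(np.mean(d_lidar < tau))     # completeness w.r.t. lidar
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```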
Here is some additional info. The BA results: the upper one is the global BA of the distributed reconstruction, the other is the incremental reconstruction. The info of the distributed reconstruction:
I think it may be caused by the inaccuracy of the final alignment step of DAGSfM. As pointed out in our recent paper, AdaSfM: From Coarse Global to Fine Incremental Adaptive Structure from Motion, DAGSfM can suffer in its final alignment stage, especially when matching outliers exist. AdaSfM solves this by introducing priors from global SfM. Since you already have lidar scans, you already have global priors, so I would suggest aligning the SfM results to the coordinate frame of the lidar scans.
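Such an alignment can be done with a similarity (Sim(3)) fit; below is a minimal sketch using the Umeyama method, where `src` and `dst` are hypothetical (N, 3) arrays of corresponding points (e.g. SfM camera centers and their lidar-frame positions):

```python
# Umeyama similarity alignment: find scale c, rotation R, translation t
# such that dst ~ c * R @ src + t. `src`/`dst` are (N, 3) correspondences.
import numpy as np

def umeyama_sim3(src, dst):
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    cov = dst_c.T @ src_c / len(src)   # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                 # guard against a reflection
    R = U @ S @ Vt

    var_src = (src_c ** 2).sum() / len(src)
    c = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - c * R @ mu_src
    return c, R, t
```

After fitting, applying `c * R @ p + t` to every SfM point and camera center brings the model into the lidar frame.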
The scene does not look that large. To improve the reconstruction, you could try increasing the upper bound from 500 to 700, which decreases the number of blocks from 3 to 2.
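For example, the re-run could look something like the following (the binary name, flag names, and paths follow the DAGSfM README example and are assumptions; please verify them against your build):

```python
# Hypothetical re-run with a larger cluster size upper bound. Binary name,
# flag names, and paths follow the DAGSfM README example and are not
# verified against this particular setup.
import subprocess

subprocess.run([
    "./distributed_mapper", "/path/to/log_dir",
    "--database_path=/path/to/database.db",
    "--image_path=/path/to/images",
    "--output_path=/path/to/output",
    "--num_images_ub=700",  # was 500, which produced 3 blocks
], check=True)
```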
Could you visualize the camera poses with the modified GUI in DAGSfM? It clearly shows which camera poses belong to the same cluster.
Yes, I'm trying it now. I'll share the result with you later. Thanks.
The good news is that the F-score is almost the same as the incremental reconstruction after I changed the number of blocks from 3 to 2. However, the BA took more time than the incremental reconstruction (I only tested once here).
Where can I find a guide to the visualization process?
The speed of DAGSfM should be comparable to the original COLMAP when the scene is not large (e.g. fewer than 3000 images for aerial scenes or 2000 images for sequential datasets). It can outperform COLMAP on 5000 images or more (you can refer to issues where DAGSfM was used to reconstruct scenes with about 100K images). Moreover, you can run DAGSfM in distributed mode to improve performance.
I think you used the original COLMAP GUI for visualization. I made minor modifications to COLMAP's GUI so that each image has an additional cluster_id property; images that belong to the same cluster are rendered in the same color. You should use the DAGSfM GUI and import your model the same way as in COLMAP's GUI.
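If the GUI is inconvenient, a rough alternative is to color the camera centers by cluster yourself (a sketch assuming pycolmap and matplotlib; `cluster_ids` is a hypothetical per-image mapping you would need to export from DAGSfM):

```python
# Rough alternative to the modified GUI: scatter camera centers colored by
# cluster. `cluster_ids` (image_id -> cluster index) is a placeholder that
# must be filled from DAGSfM's own cluster assignment.
import matplotlib.pyplot as plt
import pycolmap

rec = pycolmap.Reconstruction("/path/to/sparse/0")
cluster_ids = {image_id: 0 for image_id in rec.images}  # placeholder mapping

xs, ys, colors = [], [], []
for image_id, image in rec.images.items():
    center = image.projection_center()  # camera center in world coordinates
    xs.append(center[0])
    ys.append(center[1])
    colors.append(cluster_ids[image_id])

plt.scatter(xs, ys, c=colors, s=5)
plt.gca().set_aspect("equal")
plt.show()
```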