truncs closed this issue 3 months ago
Hello @truncs and thanks for your interest! In general, determining the absolute scale of a scene is an ill-posed problem for photogrammetry methods: scale cannot be recovered purely from triangulation, so prior information must be leveraged. If you know the absolute depth of at least one view, or the scale of an object in the scene, you can use it to correct the output of our method, but this requires manual post-processing and is not part of our default pipeline. Note that while DUSt3R does not output absolute metric reconstructions, it is theoretically possible to train it to do so, provided the training data is metric. We verified this in our experiments, but an absolute metric DUSt3R has yet to be trained and evaluated properly.
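For readers looking to apply the correction described above, here is a minimal sketch of rescaling a pointmap given the known absolute depth of a single pixel. The function and argument names (`rescale_pointmap`, `pointmap`, `pixel`, `known_depth`) are illustrative, not part of the DUSt3R API, and the sketch assumes the pointmap is expressed in the camera frame as an (H, W, 3) array:

```python
import numpy as np

def rescale_pointmap(pointmap, pixel, known_depth):
    """Rescale a pointmap of shape (H, W, 3), expressed in the camera
    frame, so that the given (u, v) pixel reaches a known metric depth.

    Hypothetical helper for illustration; DUSt3R's actual output
    format and conventions may differ.
    """
    u, v = pixel
    predicted_depth = pointmap[v, u, 2]      # z-coordinate in camera frame
    scale = known_depth / predicted_depth    # single global scale factor
    return pointmap * scale                  # uniform scaling preserves shape
```

Since the ambiguity is a single global scale, one known depth is in principle enough, though averaging over several known depths would be more robust to prediction noise.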
@vincent-leroy Thanks for the prompt reply! Would it work if we get the matched sampled points from stereo images with a known baseline? Are there plans to train a metric DUSt3R and release it?
I guess it should work if you find the right transformation to align to whatever metric prediction you have. We plan it for the next release, yes.
Thanks!
Would it be possible to use sparse or dense matchers to get the scale out? I.e., get all the matched points, use the known intrinsics and baseline to triangulate them into 3D, and then compare the same points to the pointmap in DUSt3R?
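That approach should work in principle. A minimal sketch of the final alignment step, assuming you already have metric 3D points (e.g. from calibrated stereo triangulation) and the corresponding DUSt3R pointmap entries for the same matched pixels; `estimate_scale` and the argument names are illustrative, not DUSt3R functions:

```python
import numpy as np

def estimate_scale(metric_pts, dust3r_pts):
    """Estimate the global scale factor aligning DUSt3R points to
    metric triangulations of the same matched points.

    metric_pts, dust3r_pts: (N, 3) arrays, same point order.
    Uses the median ratio of pairwise distances, which is invariant
    to any rigid-motion difference between the two reconstructions
    and robust to outlier matches.
    """
    def pdist(pts):
        # All pairwise Euclidean distances (upper triangle, no diagonal).
        diff = pts[:, None, :] - pts[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        return d[np.triu_indices(len(pts), k=1)]

    ratios = pdist(metric_pts) / pdist(dust3r_pts)
    return np.median(ratios)
```

Using pairwise distances rather than raw coordinates means the two point sets do not need to be in the same reference frame; only the scale ratio survives. With the scale in hand, multiplying the whole pointmap by it yields a metric reconstruction, up to the quality of the matches.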