vivekmistry opened 2 years ago
Setting resolution-level to 0 forces the densification to use the full image resolution, and the pipeline currently processes each entire image at once. To handle very large images, like satellite imagery, the pipeline would need to be modified to split each image into smaller virtual images and process those at native resolution without running out of memory.
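That splitting is not implemented today; purely as an illustration of what "virtual images" could mean, here is a minimal Python sketch that cuts a large frame into overlapping tiles with rasterio. The tile size, overlap, and the split_image helper are all hypothetical values/names, and a real implementation would also have to derive a per-tile camera model:

```python
# Hypothetical sketch of virtual-image tiling -- NOT part of the ODM/OpenMVS
# pipeline. Tile size and overlap are placeholder values to be tuned.
import rasterio
from rasterio.windows import Window

TILE = 4096      # hypothetical tile edge length in pixels
OVERLAP = 256    # hypothetical overlap so neighboring tiles share features

def split_image(path, out_prefix):
    """Write overlapping sub-images that together cover the full frame."""
    with rasterio.open(path) as src:
        step = TILE - OVERLAP
        for row in range(0, src.height, step):
            for col in range(0, src.width, step):
                win = Window(col, row,
                             min(TILE, src.width - col),
                             min(TILE, src.height - row))
                profile = src.profile.copy()
                profile.update(width=win.width, height=win.height,
                               transform=src.window_transform(win))
                out = f"{out_prefix}_{row}_{col}.tif"
                with rasterio.open(out, "w", **profile) as dst:
                    dst.write(src.read(window=win))
```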
Is there any reference for this suggested solution?
Or is there a variable whose limit can be increased at the depth-estimation step so that it doesn't run out of memory?
Also, these are stereo images, each with its own flight-coordinate metadata, so it is not possible for us to split them into lower-resolution images ourselves.
For satellite images, ODM might not be the best solution at the moment; check out Danesfield instead: https://github.com/Kitware/Danesfield
Share your results if you have success with it.
Describe the bug
Using ODM (OpenDroneMap) in a container, we want to generate a DSM and orthophotos from stereo images (each image is > 700 MB). In the final output, both the DSM and the orthophoto contain large areas of deformed shapes.
To Reproduce
Steps to reproduce the behavior:
For each image, a .dmap file of approximately 5 GB is generated, and the run fails while estimating the depth maps:
"Command line: /datasets/project/opensfm/undistorted/openmvs/scene.mvs --resolution-level 0 --min-resolution 3307 --max-resolution 26460 --max-threads 48 --number-views-fuse 2 -w /datasets/project/opensfm/undistorted/openmvs/depthmaps -v 0 --geometric-iters 0"
It fails with an out-of-memory error (on both 192 GB and 264 GB RAM machines).
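For scale, here is a rough back-of-envelope estimate (an assumption on my part, not a figure from the OpenMVS source) of the per-view footprint at --max-resolution 26460, assuming float32 storage for depth, normal, and confidence maps:

```python
# Back-of-envelope peak-memory estimate for one full-resolution depth map.
# Assumes a roughly square image at the 26460-pixel cap and float32 storage
# for depth (1 ch), normal (3 ch), and confidence (1 ch) -- assumptions,
# not values read out of the OpenMVS code.
width = height = 26460
pixels = width * height                  # ~700 million pixels
bytes_per_pixel = 4 * (1 + 3 + 1)        # float32 x (depth + normal + conf)
per_view_gib = pixels * bytes_per_pixel / 1024**3
print(f"~{per_view_gib:.1f} GiB per view")  # ~13 GiB, before matching buffers
# With --max-threads 48 keeping many views resident at once, peak usage can
# plausibly exceed even a 264 GB machine, consistent with the reported OOM.
```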
We have compared the generated .dmap files for resolution-level 0 and resolution-level 1 in the resulting point cloud, and the results with resolution-level 0 are much better.
Can you please advise how to execute DensifyPointCloud with resolution-level 0 without hitting the out-of-memory issue?
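As a hedged sketch rather than an official recommendation, the usual mitigations are to lower --max-threads so fewer views are held in memory at once, or to raise --resolution-level / cap --max-resolution, trading detail for memory. The snippet below reuses only flags that already appear in the failing command line above; the specific values are guesses to be tuned:

```python
# Sketch of a lower-memory DensifyPointCloud invocation, reusing only the
# flags visible in the failing command line above. Values are untested guesses.
import subprocess

cmd = [
    "DensifyPointCloud",
    "/datasets/project/opensfm/undistorted/openmvs/scene.mvs",
    "--resolution-level", "1",       # half resolution instead of full
    "--max-resolution", "13230",     # cap at half the native 26460 width
    "--max-threads", "8",            # fewer concurrent views in memory
    "--number-views-fuse", "2",
    "-w", "/datasets/project/opensfm/undistorted/openmvs/depthmaps",
    "-v", "0",
]
subprocess.run(cmd, check=True)
```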