OpenDroneMap / ODM

A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. 📷
https://opendronemap.org
GNU Affero General Public License v3.0

Increasing point cloud density #565

Closed CarlosGrohmann closed 5 years ago

CarlosGrohmann commented 7 years ago

Consider the products created by these two runs of ODM:

1) python /odm_app/OpenDroneMap/run.py \
--project-path /odm_data/ \
--images /odm_data/mission_01/ \
--force-ccd 6.17 \
mission1_defaults

2) python /odm_app/OpenDroneMap/run.py \
--project-path /odm_data/ \
--images /odm_data/mission_01/ \
--force-ccd 6.17 \
--resize-to 3999 \
--min-num-features 10000 \
mission1_tweaking

For the 'defaults' project, we have:

odm_georeferenced_model.csv - 151 MB - 4,648,506 points
odm_georeferenced_model.las - 151 MB - 4,648,506 points

And for the 'tweaking' project:

odm_georeferenced_model.csv - 146 MB - 4,499,211 points
odm_georeferenced_model.las - 146 MB - 4,499,211 points

So regardless of resizing and minimum number of features detected, the two products are quite similar.

How can we control the density of points in these products? (we have options for meshes, but not for point clouds)

dakotabenjamin commented 7 years ago

So here's some explanation. --min-num-features tells the software how many features to extract from each image for matching. Increasing that number may improve matching accuracy and produce a better sparse point cloud, but it has no effect on dense point cloud generation (other than the result possibly being more accurate). As for resize: this just proves that it is useless and should be removed; we should be using the full-sized images.

Currently there are no options for controlling the size/density of the dense point cloud. One option that could be exposed is the depth range, but I'm not familiar enough with the patchmatch algorithm to know what exactly that would affect. Right now that range is estimated from the reconstruction (the sparse cloud).

pierotofy commented 7 years ago

I think OpenSfM tries to match as many points as possible, so we already have an upper bound. We could apply a filtering / denoising step to reduce the number of points.
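In the meantime, the georeferenced cloud can already be thinned after the fact with an external tool such as PDAL (not part of the ODM pipeline). A rough sketch, assuming PDAL is installed and that simple decimation (keeping every Nth point) is good enough:

# keep every 4th point of the dense cloud (simple decimation via PDAL)
pdal translate odm_georeferenced_model.las odm_georeferenced_model_thinned.las \
decimation --filters.decimation.step=4

A proper filtering/denoising step inside the pipeline would still be nicer, of course.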

dakotabenjamin commented 7 years ago

I think the point is to save time/energy, for example to run preliminary tests or for smaller machines.

pierotofy commented 7 years ago

Ah, I see. Some time ago I looked into whether OpenSfM could generate a sparser point cloud, but I don't think that's currently an option. It would be a good feature though.

CarlosGrohmann commented 7 years ago

If OpenSfM is deriving a point cloud that is as dense as possible, that's great (how can we be sure of that?). But having some control over this would also be very nice. In PhotoScan, you can process your dataset at 'Lowest', 'Low', 'Medium', 'High' or 'Highest' quality. This lets you run things very fast, even for large datasets, which is very useful if you are on a field trip and need to check whether the data was collected properly or whether you need additional flights. Another nice thing is that you can run the process in parts: you can run just the camera alignment, for instance, and see how your images are distributed in space.

I wouldn't remove --resize-to yet. Like I posted in #562, I could process 99 images with default parameters (resizing from 4000 to 2400) but not with images at their original size.
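For reference, a 'preview quality' run along those lines would just be the same command with a more aggressive --resize-to (the value and project name below are only an illustration, roughly half the 4000 px originals); whether that actually thins the dense cloud much is exactly what is in question in this issue:

python /odm_app/OpenDroneMap/run.py \
--project-path /odm_data/ \
--images /odm_data/mission_01/ \
--force-ccd 6.17 \
--resize-to 2000 \
mission1_preview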

CarlosGrohmann commented 7 years ago

From the Photoscan forum:

The Dense Cloud quality setting controls at which size the source images are used:

Highest = full size
High = downscaled to 50%
Medium = downscaled to 25%
Low = downscaled to 12.5%
Lowest = downscaled to 6.25%
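If those percentages refer to the linear image dimension, the rough --resize-to equivalents for our 4000 px wide originals would be (my arithmetic, not an official mapping):

# PhotoScan quality -> approximate --resize-to for 4000 px images
# Highest : 100%  -> 4000
# High    :  50%  -> 2000
# Medium  :  25%  -> 1000
# Low     : 12.5% ->  500
# Lowest  : 6.25% ->  250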

CarlosGrohmann commented 7 years ago

I run some tests with Photoscan today, using the same 99 images:

| Align cameras (quality) | Tie points | Build dense cloud (quality) | Points |
| --- | --- | --- | --- |
| Lowest | 7,832 | Lowest | 324,309 |
| Low | 41,171 | Low | 1,371,760 |
| Medium | 59,044 | Medium | 5,450,140 |
| High | 56,195 | High | 20,731,966 |

I didn't go up to Highest quality for lack of time, but we can see that the ~4.6 M points from ODM are close to the Medium quality result.

KommandorKeen commented 7 years ago

So we should be able to close this one. Quite clearly Agisoft resizes the images prior to point cloud generation, and this controls the size of the point cloud. ODM has that functionality and it works in the same way. We might want to change the interface to 'divide original image dimensions by:' rather than resize-to, but the functionality is there.

KommandorKeen commented 7 years ago

Can we close this?

kikislater commented 7 years ago

I don't think so! How do we get a point cloud similar to what we get with the High setting in PhotoScan? The High setting in PhotoScan is roughly the same as Medium in Pix4D, which corresponds to images resized to half size.

KommandorKeen commented 7 years ago

I think we can; we have the tools already.

I think I used a fixed camera calibration in Photoscan.

If you can tell me where to turn off the auto calibration and insert a fixed set of camera parameters I will re-run the original data and see what it looks like.

Simon

dakotabenjamin commented 7 years ago

I found where that is. It's not implemented in ODM yet, but I'm going to push it to this PR soon: #644

dakotabenjamin commented 7 years ago

I think we can expose more depth mapping params that will help solve this: #662

MatthiasSiewert commented 5 years ago

What hasn't been pointed out in this discussion is that a reduced image scale can improve point cloud generation for very noisy, blurry or vegetated imagery. It can be easier for the software to recognize patterns at lower resolutions; hence the expression about not seeing the wood for the trees. Imagine trying to find a specific point in a very dense thicket of shrubs: it actually gets easier when you squint and look for a larger blob of green.

The pix4d manual also gives some background reasoning on this option: https://support.pix4d.com/hc/en-us/articles/202557759-Menu-Process-Processing-Options-1-Initial-Processing-General

However, I found that increasing the number of point features is a safer way of getting better results, though it is computationally more intensive.
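Something along these lines, using only flags already shown in this thread (the paths are the ones from the first post; the feature count and project name are just an illustration):

python /odm_app/OpenDroneMap/run.py \
--project-path /odm_data/ \
--images /odm_data/mission_01/ \
--min-num-features 20000 \
mission1_more_features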

smathermather commented 5 years ago

Good points @MatthiasSiewert. We see this across packages, e.g.: https://github.com/dronemapper-io/NodeMICMAC/issues/14

Also, we've done a lot on this issue, especially with SMVS and then MVE additions to the toolchain. More will happen with respect to point cloud densities, but I think we can safely close this.
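For anyone finding this later: newer ODM builds also expose the depth map resolution used for densification, which trades dense-cloud density against speed directly. I believe the flag is --depthmap-resolution, but check run.py --help for the exact name and default in your version; the value and project name below are only an illustration:

python /odm_app/OpenDroneMap/run.py \
--project-path /odm_data/ \
--images /odm_data/mission_01/ \
--depthmap-resolution 320 \
mission1_low_density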