OpenDroneMap / ODM

A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. 📷
https://opendronemap.org
GNU Affero General Public License v3.0

Improve 2.5D mesh and orthophoto #640

Closed: pierotofy closed this issue 6 years ago

pierotofy commented 7 years ago

Feel free to assign this to me, this is just a placeholder for notes and a reminder that this needs to be done at some point 😄

The goal is to further improve the quality of orthophotos. The biggest problem, reported multiple times by users, is that trees appear too much like blobs. A suggested approach would be as follows:

pierotofy commented 7 years ago

I don't think PDAL has a "roughness" filter, but perhaps somebody can point out a way to do what lasclassify does in PDAL without having to extract CloudCompare's algo?
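For illustration, the kind of classification lasclassify performs can be approximated with a local-roughness test: fit a plane to each point's neighbors and treat points whose neighborhoods are far from planar as vegetation. This is a minimal numpy sketch, not lasclassify's actual algorithm; the k value and the 0.1 threshold are hypothetical, and the neighbor search is brute force:

```python
import numpy as np

def local_roughness(points, k=10):
    """For each point, fit a plane to its k nearest neighbors (via SVD)
    and return the RMS spread of those neighbors orthogonal to the plane.
    Brute-force O(n^2) neighbor search; fine for a small demo."""
    n = len(points)
    rough = np.empty(n)
    for i in range(n):
        d2 = np.sum((points - points[i]) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        # Smallest singular value ~ spread orthogonal to the best-fit plane
        s = np.linalg.svd(centered, compute_uv=False)
        rough[i] = s[-1] / np.sqrt(k)
    return rough

# Synthetic scene: a flat patch (roof-like) and a noisy 3D cluster (tree-like)
rng = np.random.default_rng(0)
roof = np.column_stack([rng.uniform(0, 10, 200), rng.uniform(0, 10, 200), np.zeros(200)])
tree = rng.uniform(0, 2, (200, 3)) + [20.0, 0.0, 0.0]
pts = np.vstack([roof, tree])
r = local_roughness(pts)
is_vegetation = r > 0.1  # hypothetical threshold
```

The flat patch comes out with near-zero roughness while the cluster does not, which is the property a lasclassify-style building/vegetation split exploits.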

PeterSprague commented 7 years ago

That would be great. I posted recently on the ODM Gitter about this issue while processing a series of forest health survey images.

dakotabenjamin commented 7 years ago

What kind of reporting (quality, etc.) can we glean from these changes, or the mesh in general? How can we determine quantitatively improvements to the mesh?

pierotofy commented 7 years ago

The change is aimed at producing better orthophotos, not better meshes (I don't think we should remove vegetation if the goal is an accurate mesh). Quantitatively, it should be easy to run multiple datasets with and without the 2.5D mesh option (after the enhancements are added) and compare the results visually against the older Poisson mesh results, or compare the input images to the resulting orthophoto for artifacts and/or distortion/blobs.

pierotofy commented 7 years ago

Here's a tentative filter (still need to add noise filtering, maybe decimation, or just let the CGAL code handle that):


```json
{
  "pipeline": [
    {
      "type": "readers.ply",
      "filename": "merged.ply"
    },
    {
      "type": "filters.smrf",
      "slope": 0.15
    },
    {
      "type": "filters.approximatecoplanar",
      "knn": 10
    },
    {
      "type": "filters.predicate",
      "script": "filter_ground_plus_coplanar.py",
      "function": "filter"
    },
    {
      "type": "writers.ply",
      "filename": "final.ply"
    }
  ]
}
```

And the accompanying `filter_ground_plus_coplanar.py`:

```python
import numpy as np

def filter(ins, outs):
    cls = ins['Classification']
    cpl = ins['Coplanar']

    # Keep all ground points (ASPRS class 2)...
    keep = np.equal(cls, 2)

    # ...plus any point flagged as coplanar
    keep = keep | np.equal(cpl, 1)

    outs['Mask'] = keep
    return True
```
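As a quick sanity check, the predicate's keep logic can be exercised on synthetic dimension arrays (the values below are made up; 2 is the ASPRS ground class, and Coplanar = 1 marks approximately planar points):

```python
import numpy as np

# Synthetic PDAL dimensions: Classification (2 = ground) and
# Coplanar (1 = point lies on an approximately planar surface)
cls = np.array([2, 2, 1, 1, 6, 1])
cpl = np.array([0, 1, 1, 0, 1, 0])

# Keep ground points plus coplanar (building-like) points;
# drop non-ground, non-planar points (likely vegetation)
keep = np.equal(cls, 2) | np.equal(cpl, 1)
print(keep.tolist())  # [True, True, True, False, True, False]
```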
pierotofy commented 7 years ago

Some preliminary screens:

Old: [image]

New: [image]

Ground + Buildings filter: [image]

pierotofy commented 7 years ago

After smoothing: [image]

smathermather commented 7 years ago

w00t

pierotofy commented 7 years ago

Review of point cloud segmentation and classification algorithms: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W3/339/2017/isprs-archives-XLII-2-W3-339-2017.pdf

pierotofy commented 7 years ago

Original: [image]

Ground vs segmented non-ground: [image]

Ground plus planar surfaces segments: [image]

Original: [image]

Ground vs segmented non-ground: [image]

Ground plus planar surfaces segments: [image]

Original: [image]

Ground vs segmented non-ground: [image]

Ground plus planar surfaces segments: [image]

Another cool before/after screen:

[image]

[image]

dakotabenjamin commented 6 years ago

Are the last image pairs only removing vegetation? very cool

pierotofy commented 6 years ago

Yes!

dakotabenjamin commented 6 years ago

How far do you think until we can pull this in?

pierotofy commented 6 years ago

Probably 1 or 2 more weeks, depending on free time. I'm removing the 2.5D Delaunay triangulation to go back to a poisson reconstruction (without trees) since from my tests, mvs_texturing seems to find better faces when using poisson (all other things being equal). This calls for a simplification of the code base, removal of the odm_25dmeshing module and addition of the new changes to odm_meshing.

pierotofy commented 6 years ago

Current mesh: [image]

New: [image]

Current mesh: [image]

New: [image]

Current mesh: [image]

New: [image]

pierotofy commented 6 years ago

On a slightly different note, @dakotabenjamin is there a reason why we've been using the dense point cloud from OpenSfM instead of the sparse one for orthophoto generation? I'm getting some good results by using the sparse output and 2.5D meshing (which doesn't have much vegetation).

[image]

The bonus would be that people that just want an orthophoto (agriculture) don't need to do a dense reconstruction.

dakotabenjamin commented 6 years ago

No reason, let's try it!

dakotabenjamin commented 6 years ago

@pierotofy if you don't already have some code written for that, I could work on sparse -> ortho today.

pierotofy commented 6 years ago

Started working on a branch today; I've been mostly focused on slimming the 2.5D mesh module at this point. See https://github.com/pierotofy/OpenDroneMap/tree/sparse

pierotofy commented 6 years ago

Changes to scripts/odm_meshing.py in that branch need to be reverted, but we'll probably still need to run smrf from PDAL to segment ground vs. non-ground. I'm finding that excessive smoothing leads to poor building textures.

pierotofy commented 6 years ago

Could use some help in setting up the pipeline with ecto!

dakotabenjamin commented 6 years ago

Yeah that's what I was thinking

dakotabenjamin commented 6 years ago

How are you getting the sparse output in usable format?

pierotofy commented 6 years ago

First make sure min-num-frames is set to something higher than 4000. I'm getting good results with 12000 or 20000. Otherwise there are too few points to reconstruct anything decent.

```
bin/opensfm_run_all <dataset>
bin/opensfm undistort <dataset>
bin/opensfm export_ply <dataset>
```

Note that the resulting PLY has camera points.

For texturing we also need the nvm file:

```
bin/opensfm export_visualsfm <dataset>
```

dakotabenjamin commented 6 years ago

I'm going to avoid changing any of the ecto pipe because (1) I plan to remove it in the near future and (2) opensfm.py has both sparse and dense reconstruction so very little actually changes in the pipeline. It can be done with flags similar to --use-pmvs.

The drawback is that it makes opensfm.py even less readable than it already is.

See: https://github.com/dakotabenjamin/OpenDroneMap/tree/sparse
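A flag along the lines of --use-pmvs could be declared roughly like this (a hypothetical argparse sketch, not ODM's actual config code; the --dense name is an assumption based on the discussion below):

```python
import argparse

# Hypothetical sketch of how a dense/sparse switch could be exposed,
# mirroring how boolean flags like --use-pmvs are declared.
parser = argparse.ArgumentParser(prog='odm')
parser.add_argument('--use-pmvs', action='store_true',
                    help='Use PMVS for the dense reconstruction')
parser.add_argument('--dense', action='store_true',
                    help='Run the dense reconstruction; without it, the '
                         'orthophoto is produced from the sparse model only')

args = parser.parse_args(['--dense'])
print(args.dense)     # True
print(args.use_pmvs)  # False
```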

dakotabenjamin commented 6 years ago

This should solve the camera positions: https://github.com/mapillary/OpenSfM/pull/229

pierotofy commented 6 years ago

The upside_down method is not relevant anymore; if the points are flipped, we'll need another way to check. I haven't delved into that yet.
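One simple heuristic for such a check: in a nadir survey the cameras should sit above the terrain, so if the camera centers end up below the points, the model is probably flipped. A minimal numpy sketch of that idea (a hypothetical replacement for the old upside_down check, not ODM code):

```python
import numpy as np

def looks_flipped(points_z, camera_z):
    """Heuristic: if the median camera height is below the median point
    height, the reconstruction is probably upside down."""
    return np.median(camera_z) < np.median(points_z)

# Synthetic example: terrain around z = 0, cameras at ~100 m
terrain = np.random.default_rng(1).normal(0.0, 2.0, 1000)
cams = np.full(20, 100.0)
print(looks_flipped(terrain, cams))    # False
print(looks_flipped(-terrain, -cams))  # True
```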

pierotofy commented 6 years ago

I wonder if the pipeline should be rearranged in case the --sparse option is used; when a user just wants to generate an orthophoto (and doesn't care about having a 3D model, or a dense point cloud), the meshing, texturing and georeferencing steps should happen before the dense reconstruction.

In fact, since we always have a sparse reconstruction, the orthophoto should probably be always generated with the sparse reconstruction. Then we should give users the option to continue processing the dense point cloud and associated 3D model.

```
images --> sparse recon --> meshing (from sparse) --> texturing --> georeferencing --> orthophoto
    | optional from here; processing could stop if specified by user |
--> dense recon --> meshing (from dense) --> texturing --> georeferencing --> orthophoto from dense (optional)
```

Thoughts?

smathermather commented 6 years ago

I think this approach makes sense, particularly if the sparse mesh/orthophoto is good enough for most purposes. The dense reconstruction is still a bit aspirational: it requires additional optimization (which is a couple of months away) and better meshing (which doesn't have a timeline yet).

dakotabenjamin commented 6 years ago

It's a good idea. Implementation should be fairly simple.

I've pushed some of the pipeline work up until meshing on my fork; I'll see what I can do to implement the above ideas.

dakotabenjamin commented 6 years ago

Quick question: do you envision 2.5D meshing being run for both the sparse and dense pipelines, and if not, which one?

pierotofy commented 6 years ago

2.5D for sparse, poisson for dense. Poisson smooths the data too much for a sparse dataset (all buildings will come out severely warped). I'm currently working to improve the 2.5D meshing module to better suit a sparse dataset.

dakotabenjamin commented 6 years ago

Great. I'm thinking of running the pipeline similar to how you did 2.5dmeshing (because F&%$ ecto):

```python
runs = [{
    'infile': tree.opensfm_sparse_model,
    'outfile': tree.odm_mesh,
    '25dmesh': tree.odm_25dmesh
}]

if args.dense:
    if args.use_pmvs:
        runs += [{
            'infile': tree.pmvs_model,
            'outfile': tree.odm_mesh_dense,
        }]
    else:
        runs += [{
            'infile': tree.opensfm_model,
            'outfile': tree.odm_mesh_dense,
        }]
```

So I'll wait on your changes and then change that up through the steps.

dakotabenjamin commented 6 years ago

One possible solution to estimating point normals: create a plane from the nearest n points and compute the normal from that.
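That plane-fitting idea can be sketched with numpy: the normal of the best-fit plane through the k nearest neighbors is the eigenvector of their covariance matrix with the smallest eigenvalue. A minimal sketch with a brute-force neighbor search (k and the example data are illustrative):

```python
import numpy as np

def estimate_normal(points, idx, k=8):
    """Estimate the normal at points[idx] as the direction of least
    variance among its k nearest neighbors (PCA plane fit)."""
    d2 = np.sum((points - points[idx]) ** 2, axis=1)
    nbrs = points[np.argsort(d2)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # smallest-variance direction

# Points on the z = 0 plane: the normal should be +/-(0, 0, 1)
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)])
n = estimate_normal(pts, 0)
print(np.round(np.abs(n), 3))  # [0. 0. 1.]
```

Note the sign of the normal is ambiguous with this method; orienting it consistently (e.g. toward the cameras) would be a separate step.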

pierotofy commented 6 years ago

I don't think we'll need normals, but if we do there are a multitude of methods to estimate them. See http://pointclouds.org/documentation/tutorials/normal_estimation.php

pierotofy commented 6 years ago

Another approach which does not require code is to use https://www.pdal.io/stages/filters.normal.html

dakotabenjamin commented 6 years ago

Closing this