pierotofy closed this issue 6 years ago.
I don't think PDAL has a "roughness" filter, but perhaps somebody can point out a way to do what lasclassify does in PDAL without having to extract CloudCompare's algo?
That would be great. I posted recently on odm gitter about this issue when processing a series of forest health survey images.
What kind of reporting (quality, etc.) can we glean from these changes, or the mesh in general? How can we determine quantitatively improvements to the mesh?
The change would be aimed at producing better orthophotos, not better meshes (I don't think we should remove vegetation if we want an accurate mesh). Quantitatively, it should be easy to run multiple datasets with and without the 2.5D mesh option (after the enhancements are added) and compare the results visually against the older poisson mesh results, or compare the input images to the resulting orthophoto for artifacts and distortion/blobs.
Here's a tentative filter (still need to add noise filtering, maybe decimation, or just let the CGAL code handle that):
```json
{
  "pipeline": [
    {
      "type": "readers.ply",
      "filename": "merged.ply"
    },
    {
      "type": "filters.smrf",
      "slope": 0.15
    },
    {
      "type": "filters.approximatecoplanar",
      "knn": 10
    },
    {
      "type": "filters.predicate",
      "script": "filter_ground_plus_coplanar.py",
      "function": "filter"
    },
    {
      "type": "writers.ply",
      "filename": "final.ply"
    }
  ]
}
```
```python
# filter_ground_plus_coplanar.py -- PDAL predicate filter script
import numpy as np

def filter(ins, outs):
    cls = ins['Classification']
    cpl = ins['Coplanar']
    # Keep all ground points (ASPRS class 2)
    keep = np.equal(cls, 2)
    # To those, add the points flagged as coplanar
    keep = keep | np.equal(cpl, 1)
    outs['Mask'] = keep
    return True
```
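The masking logic can be exercised outside of PDAL with plain NumPy. The sample classification codes and coplanar flags below are invented for illustration (ASPRS class 2 = ground):

```python
import numpy as np

# Hypothetical inputs: Classification codes and Coplanar flags per point
cls = np.array([2, 2, 5, 1, 2, 1], dtype=np.uint8)
cpl = np.array([0, 0, 1, 0, 0, 1], dtype=np.uint8)

# Keep ground points, plus any point flagged as coplanar
keep = np.logical_or(cls == 2, cpl == 1)
print(keep.tolist())  # -> [True, True, True, False, True, True]
```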
Some preliminary screens (screenshots): old mesh, new mesh, ground + buildings filter, and after smoothing.
w00t
Review of point cloud segmentation and classification algorithms: https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W3/339/2017/isprs-archives-XLII-2-W3-339-2017.pdf
Three example datasets, each shown as screenshots: original, ground vs. segmented non-ground, and ground plus planar surface segments.
Another cool before/after screen:
Are the last image pairs only removing vegetation? Very cool.
Yes!
How far do you think until we can pull this in?
Probably 1 or 2 more weeks, depending on free time. I'm removing the 2.5D Delaunay triangulation to go back to a poisson reconstruction (without trees) since from my tests, mvs_texturing seems to find better faces when using poisson (all other things being equal). This calls for a simplification of the code base, removal of the odm_25dmeshing module and addition of the new changes to odm_meshing.
Three more before/after screenshots comparing the current mesh to the new one.
On a slightly different note, @dakotabenjamin is there a reason why we've been using the dense point cloud from OpenSfM instead of the sparse one for orthophoto generation? I'm getting some good results by using the sparse output and 2.5D meshing (which doesn't have much vegetation).
The bonus would be that people that just want an orthophoto (agriculture) don't need to do a dense reconstruction.
No reason, let's try it!
@pierotofy if you don't already have some code written for that, I could work on sparse -> ortho today.
Started working on a branch today; I've been mostly focused on slimming the 2.5D mesh module at this point. See https://github.com/pierotofy/OpenDroneMap/tree/sparse
Changes to scripts/odm_meshing.py in that branch need to be reverted, but we'll probably still need to run smrf from PDAL to segment ground vs. non-ground. I'm finding that excessive smoothing leads to poor building textures.
Could use some help in setting up the pipeline with ecto!
Yeah that's what I was thinking
How are you getting the sparse output in usable format?
First make sure min-num-frames is set to something higher than 4000. I'm getting good results with 12000 or 20000. Otherwise there are too few points to reconstruct anything decent.
```shell
bin/opensfm_run_all <dataset>
bin/opensfm undistort <dataset>
bin/opensfm export_ply <dataset>
```
Note that the resulting PLY has camera points.
For texturing we also need the nvm file:
```shell
bin/opensfm export_visualsfm <dataset>
```
I'm going to avoid changing any of the ecto pipeline because (1) I plan to remove it in the near future, and (2) opensfm.py has both sparse and dense reconstruction, so very little actually changes in the pipeline. It can be done with flags similar to --use-pmvs. The drawback is that it makes opensfm.py even less readable than it already is.
See: https://github.com/dakotabenjamin/OpenDroneMap/tree/sparse
This should solve the camera positions: https://github.com/mapillary/OpenSfM/pull/229
The upside_down method is not relevant anymore; if the points are flipped, we'll need another way to check. I haven't delved into that yet.
I wonder if the pipeline should be rearranged in case the --sparse option is used; when a user just wants to generate an orthophoto (and doesn't care about having a 3D model, or a dense point cloud), the meshing, texturing and georeferencing steps should happen before the dense reconstruction.
In fact, since we always have a sparse reconstruction, the orthophoto should probably be always generated with the sparse reconstruction. Then we should give users the option to continue processing the dense point cloud and associated 3D model.
images --> sparse recon --> meshing (from sparse) --> texturing --> georeferencing --> orthophoto
| optional from here; processing could stop if specified by the user |
--> dense recon --> meshing (from dense) --> texturing --> georeferencing --> orthophoto from dense (optional)
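The reordering could be sketched as a simple ordered stage list. Stage names here are illustrative only, not the actual ODM module names:

```python
def build_stages(sparse_only=True):
    """Return processing stages in order; a sparse-only run stops at the first orthophoto."""
    stages = ["sparse_recon", "mesh_sparse", "texture", "georeference", "orthophoto"]
    if not sparse_only:
        # Optional continuation: dense reconstruction and its own mesh/ortho
        stages += ["dense_recon", "mesh_dense", "texture_dense",
                   "georeference_dense", "orthophoto_dense"]
    return stages

print(build_stages())       # sparse-only run
print(build_stages(False))  # full run continues through the dense steps
```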
Thoughts?
I think this approach makes sense, particularly if the sparse mesh/orthophoto is good enough for most purposes. The dense reconstruction is still a bit aspirational: it requires additional optimization (which is a couple of months away), better meshing (which doesn't have a timeline), etc.
It's a good idea. Implementation should be fairly simple.
I've pushed some of the pipeline work up until meshing on my fork; I'll see what I can do to implement the above ideas.
Quick question: do you envision 2.5D meshing being run for both the sparse and dense pipelines, and if not, which?
2.5D for sparse, poisson for dense. Poisson smooths the data too much for a sparse dataset (all buildings will come out severely warped). I'm currently working to improve the 2.5D meshing module to better suit a sparse dataset.
Great. I'm thinking of running the pipeline similar to how you did 2.5dmeshing (because F&%$ ecto):
```python
runs = [{
    'infile': tree.opensfm_sparse_model,
    'outfile': tree.odm_mesh,
    '25dmesh': tree.odm_25dmesh
}]

if args.dense:
    if args.use_pmvs:
        runs += [{
            'infile': tree.pmvs_model,
            'outfile': tree.odm_mesh_dense,
        }]
    else:
        runs += [{
            'infile': tree.opensfm_model,
            'outfile': tree.odm_mesh_dense,
        }]
```
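A sketch of how those run dicts might be consumed: dispatch to 2.5D meshing when the run carries a `25dmesh` output path, and poisson otherwise (per the earlier sparse-vs-dense comment). The function name and file paths here are hypothetical:

```python
def mesh_run(run):
    """Pick the mesher per run: 2.5D for the sparse model, poisson for dense.

    Hypothetical sketch; real code would invoke the ODM meshing modules.
    """
    if '25dmesh' in run:
        return ('25d', run['infile'], run['25dmesh'])
    return ('poisson', run['infile'], run['outfile'])

runs = [
    {'infile': 'sparse.ply', 'outfile': 'mesh.ply', '25dmesh': 'mesh25d.ply'},
    {'infile': 'dense.ply', 'outfile': 'mesh_dense.ply'},
]
print([mesh_run(r)[0] for r in runs])  # -> ['25d', 'poisson']
```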
So I'll wait on your changes and then change that up through the steps.
One possible solution to estimating point normals: create a plane from the nearest n points and compute the normal from that.
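That plane-fit idea can be sketched with plain NumPy: fit a plane to the k nearest neighbours via PCA and take the eigenvector with the smallest eigenvalue of the neighbourhood covariance as the normal. This is only a sketch of the idea, not code from the project:

```python
import numpy as np

def estimate_normal(points, query, k=10):
    """Estimate the surface normal at `query` from its k nearest points.

    PCA plane fit: the eigenvector with the smallest eigenvalue of the
    neighbourhood covariance is the plane normal (sign is ambiguous).
    """
    d = np.linalg.norm(points - query, axis=1)
    nbrs = points[np.argsort(d)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]

# Points sampled on the z = 0 plane should yield a normal along +/- z
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       np.zeros(50)])
n = estimate_normal(pts, np.array([0.0, 0.0, 0.0]))
print(np.round(np.abs(n), 3))  # -> [0. 0. 1.]
```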
I don't think we'll need normals, but if we do there are a multitude of methods to estimate them. See http://pointclouds.org/documentation/tutorials/normal_estimation.php
Another approach which does not require code is to use https://www.pdal.io/stages/filters.normal.html
Closing this
Feel free to assign this to me, this is just a placeholder for notes and a reminder that this needs to be done at some point 😄
The goal is to further improve the quality of orthophotos. The biggest problem, reported multiple times by users, is trees appearing too much like blobs. A suggested approach would be as follows: