| Dataset | Reconstruction | real | user | sys |
|---|---|---|---|---|
| Sheffield park (78 images, not 100% planar) | Incremental | 3m6.405s | 4m15.729s | 0m15.910s |
| Sheffield park | Planar | 2m14.561s | 3m16.540s | 0m15.792s |
| Sunset park (69 images, mostly planar) | Incremental | 4m0.041s | 7m2.639s | 0m18.828s |
| Sunset park | Planar | 0m56.732s | 1m45.183s | 0m3.751s |
| Brighton beach (18 images, not 100% planar) | Incremental | 0m27.320s | 0m45.607s | 0m3.248s |
| Brighton beach | Planar | 0m18.697s | 0m31.375s | 0m1.966s |
| Agremo dataset (195 images, AG field) | Incremental | 7m41.867s | 10m25.661s | 0m36.215s |
| Agremo dataset | Planar | 4m2.456s | 6m3.528s | 0m26.396s |
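From the "real" times above, planar works out to roughly 1.4x, 4.2x, 1.5x, and 1.9x faster than incremental. A small throwaway script to compute those ratios:

```python
# Throwaway calculation: real-time speedup of planar vs incremental
# using the "real" values from the table above.
times = {  # dataset: (incremental seconds, planar seconds)
    "Sheffield park": (3 * 60 + 6.405, 2 * 60 + 14.561),
    "Sunset park": (4 * 60 + 0.041, 56.732),
    "Brighton beach": (27.320, 18.697),
    "Agremo": (7 * 60 + 41.867, 4 * 60 + 2.456),
}
for name, (incremental, planar) in times.items():
    print(f"{name}: {incremental / planar:.2f}x faster with planar")
```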
I expect the numbers to be even better on machines that have lots of cores, as this algorithm has fewer deadlocks during multi-threaded operations.
Future improvements could include a more robust outlier filter for degenerate camera poses by looking at points that fall outside of the main plane.
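For illustration only (this is not what the PR implements), such a filter could fit a dominant plane to the sparse points with a simple RANSAC loop and flag cameras whose tracked points mostly fall far from it. The function names, thresholds, and the `points_by_camera` structure below are hypothetical:

```python
import numpy as np

def fit_plane_ransac(points, iters=200, threshold=0.5, seed=0):
    """Fit a dominant plane to Nx3 points with a basic RANSAC loop.

    Returns (unit normal, offset d, inlier mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p0)
        inliers = np.abs(points @ normal + d) < threshold
        if best is None or inliers.sum() > best[2].sum():
            best = (normal, d, inliers)
    if best is None:
        raise ValueError("could not fit a plane")
    return best

def flag_degenerate_cameras(points_by_camera, threshold=0.5, max_off_plane=0.5):
    """Flag cameras whose observed points mostly fall off the dominant plane."""
    all_points = np.vstack(list(points_by_camera.values()))
    normal, d, _ = fit_plane_ransac(all_points, threshold=threshold)
    flagged = []
    for cam, pts in points_by_camera.items():
        off_plane = np.abs(pts @ normal + d) >= threshold
        if off_plane.mean() > max_off_plane:
            flagged.append(cam)
    return flagged
```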
Did you experience increased memory consumption/swapping with planar?
I'm testing it on my Old_Orchard dataset against the defaults and noticed that with planar, all 32GB of RAM has been consumed and I'm currently pushing 30GB of swap, which wasn't even touched with incremental/defaults.
It shouldn't take more memory during the reconstruction step, but perhaps the reconstruction ended up degenerate and some other issue caused out of memory problems. Do you know at which step in the process the memory usage went up?
Partway through the OpenMVS phase.
Options:
auto-boundary: true, dsm: true, sfm-algorithm: planar
Dataset imported via Cloud Import - GitHub:
https://github.com/Saijin-Naib/sUAS_Photogrammetry_Suite_Test_Data/tree/trunk/datasets/OldOrchard_2017-07-22
Node:
node-odm-1 (manual)
Versions:
| CONTAINER ID | IMAGE | COMMAND | CREATED | STATUS | PORTS | NAMES |
|---|---|---|---|---|---|---|
| ac7d7ee9f0a4 | opendronemap/webodm_webapp | "/bin/bash -c 'chmod…" | 21 hours ago | Up 12 hours | 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp | webapp |
| 1f099b4e659a | opendronemap/webodm_webapp | "/bin/bash -c '/webo…" | 21 hours ago | Up 12 hours | | worker |
| 71289546ee79 | opendronemap/nodeodm | "/usr/bin/node /var/…" | 21 hours ago | Up 12 hours | 0.0.0.0:49154->3000/tcp, :::49154->3000/tcp | webodm_node-odm_1 |
| b3c5c593a7e5 | redis | "docker-entrypoint.s…" | 21 hours ago | Up 12 hours | 6379/tcp | broker |
| d32b63bd8590 | opendronemap/webodm_db | "docker-entrypoint.s…" | 21 hours ago | Up 12 hours | 0.0.0.0:49153->5432/tcp, :::49153->5432/tcp | db |
Correction, it might take more memory during opensfm reconstruct; I'm currently looking at possible memory bottlenecks. Try passing --max-concurrency 1 for the time being.
I was able to get a reconstruction with defaults and max-concurrency 1. Also, this was done on CPU (no GPU processing).
Did you run with --max-concurrency 1? You didn't observe any crazy RAM/SWAP usage?
Not with concurrency at 1. With more than 1, memory usage does go a bit crazy (working on that...)
Memory usage should be vastly improved with https://github.com/OpenDroneMap/ODM/pull/1455
Vastly improved is an understatement :)
Didn't even come close to touching swap.
Massive improvement in processing times as a result!
Starting small in my tests. Running a 2300 image dataset through... :smile:
And a 430 image dataset... OK, I've got a few worth trying.
Dang, that's super parallel...
But was the end result good also? :smile:
> But was the end result good also?
Yes! Not without artifacts, but that's to be expected. Where things are super smooth, though, it looks like a very classic ortho:
1855 images processed in 02:23:53. Not too shabby; this is corridor-based mapping, which ODM often struggles with. The 2000+ image set (more square than long and skinny) is taking a very long time at the orthophoto stage, however.
Ok, 2305 images, and I can't decide if the results are more Dali or Escher for the orthophoto:
But the why is quite apparent from the point cloud:
That could be why the odm_orthophoto step took so long, too. That's one complex mesh to render.
Rerunning with the following, as this is a lovely RTK dataset with 4cm accuracy data:
fast-orthophoto: true, gps-accuracy: .08, matcher-neighbors: 4, sfm-algorithm: planar
Although this accuracy is set in the GPSXYAccuracy and GPSZAccuracy tags, so maybe my fix is a fool's errand.
Is the terrain actually planar in this dataset? I would imagine that as an area gets wider the planar assumption starts to break down more and more (and bundle adjustment can no longer optimize a solution).
Yeah, that's the violated assumption for this large an area, for sure, but I was hoping that at a sufficiently large scale, that might be less of an issue:
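As a rough back-of-the-envelope (my own numbers, not from the PR): a point sitting at height $h$ above the assumed ground plane, imaged from flying height $H$ at a horizontal distance $r$ from nadir, gets pushed along the plane by roughly

$$\Delta \approx \frac{r\,h}{H}.$$

With $H = 100\,\mathrm{m}$, $r = 50\,\mathrm{m}$ and $h = 10\,\mathrm{m}$ that is about $5\,\mathrm{m}$ of displacement, orders of magnitude larger than a typical few-centimetre GSD, so terrain relief over a wide block quickly strains the planar assumption.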
It would be interesting to see if combining this with split-merge could handle it better.
Oooh. On it.
> It would be interesting to see if combining this with split-merge could handle it better.
It's much closer. Something weird happened, but it's nearly complete and quite good overall:
Also, it's not quite done yet, but it's cropping the orthophoto (poor GDAL...) and here are the times as they stand now:
Incremental:
Field
Final time: 14:51:40. Not too shabby. I'll try turning down the size of the submodels. That missing bit is weird, but it's too late tonight to investigate.
> That missing bit is weird, but it's too late tonight to investigate.
The reason for those gaps likely has everything to do with using the default split-overlap setting of 150m:
Now rerunning with 320m of overlap, and just for giggles, I've got the split size down to 200 so I can use more cores.
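For reference, one way to kick off a planar + split-merge run like this programmatically is through the PyODM client against a NodeODM instance; the host/port, image paths, and exact option mix below are illustrative assumptions rather than the actual settings used for this run:

```python
from glob import glob
from pyodm import Node  # pip install pyodm

# Connect to a NodeODM instance (host/port are placeholders).
node = Node("localhost", 3000)

# Planar SfM combined with local split-merge, mirroring the options discussed above.
task = node.create_task(
    sorted(glob("images/*.JPG")),
    {
        "sfm-algorithm": "planar",
        "matcher-neighbors": 4,
        "split": 200,           # target images per submodel
        "split-overlap": 320,   # metres of overlap between submodels
    },
)
task.wait_for_completion()
task.download_assets("./results")
```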
That will quite do:
It is missing the southwest corner. I haven't checked yet, but I have seen that before when cutlines fail to converge. I am re-running at a 300 image split as that will reduce the number of submodels and thus hopefully reduce the probability of missing submodels. But the gap in the middle is now filled, and the data look quite good. It seems the combo of planar reconstruction and split merge is quite suitable even for hilly areas.
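The submodel-count reasoning works out roughly like this (ignoring that ODM's actual partitioning also accounts for overlap and image positions):

```python
import math

images = 2305  # size of this dataset
for split in (200, 300):
    print(f"split {split}: ~{math.ceil(images / split)} submodels")
# split 200: ~12 submodels, split 300: ~8 submodels.
# Fewer submodels means fewer seams and fewer chances for a submodel to go missing.
```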
I have run my first test on a dataset of 1600 images taken with a P4RTK. It processed much quicker (9:28 vs 5:49), but according to the report there is a GPS error of 427m, whereas without planar the GPS error was 0.01. What could be causing this? I'll try running it again without fast-orthophoto and see if there's any difference.
> What could be causing this?
My guess: the estimates of error haven't been redesigned for this approach... They might be meaningless. But I didn't write any of the code, so this is a pure hunch.
> I am re-running at a 300 image split as that will reduce the number of submodels and thus hopefully reduce the probability of missing submodels.
The gap is still filled:
2700 images from an eBee S110 NIR, 128 threads (AMD EPYC 7662), 128GB RAM allocated => 5h22
Output:
Some artifacts in the corner:
This corner is really bad; maybe removing some pictures will help.
Otherwise pretty good and usable.
You might try setting split to some multi-hundred image value, say 300-400.
Works pretty well! But it took 2x the time of the standard parameters.
I would expect that if you set max-concurrency to 1. The algorithm is heavily parallelized and will be fast only if you let it run in parallel.
I think it was slowed down a bit by split: 400 rather than by max-concurrency, no?
I tested on multiple datasets, such as marine ones. A split of ~300 is needed in all my tests to get proper results.
> I think it was slowed down a bit by split: 400 rather than by max-concurrency, no?
No: one of the major enhancements with this is maximising concurrency of use. So if you constrain it to a single thread, you remove the major performance improvement.
> I tested on multiple datasets, such as marine ones. A split of ~300 is needed in all my tests to get proper results.
I'm finding something quite similar over cities: somewhere between 200-400 seems to be the sweet spot for assuming planarity of the input data, even for a moderately hilly city.
Took a hell of a lot longer to complete with planar. This is on an agricultural field that is "relatively flat". Will retry with different settings.
This PR adds support for really fast planar reconstructions (e.g. agricultural fields)
Requirements:
Then to obtain an orthophoto really quickly, one can pass:
--sfm-algorithm planar --matcher-neighbors 4 --fast-orthophoto
If one needs a full 3D model from the mostly planar scene, one can omit the --fast-orthophoto flag and a full 3D reconstruction will still take place. Experimental! :boom: