alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

Best settings for large scale reconstruction #549

Closed. ToddVG closed this issue 5 years ago.

ToddVG commented 5 years ago

I am trying to free up as much memory and processing power as I can to run AliceVision/Meshroom. I only use my Windows 10 Pro machine (32 GB of RAM, i7-3770, GeForce GTX 1070) for photogrammetry work, so I only want to load the services that I must have to run Meshroom and connect to the internet, and that is it. Does anyone have any advice?

natowi commented 5 years ago

Your computer specs are good. What exactly is your question?

ToddVG commented 5 years ago

I just want it to run as fast and with as few problems as possible. I do this just as a hobby, and I am playing around with doing an entire model of my downtown area. On my first flight with the drone I took 1400 pictures and ran them through version 2019.1, but it just kept giving me issues. I have lowered the points in Meshing and downscaled to 1 in DepthMap. I then went to adding only 100 pictures at a time and still had some issues. I am now using version 2018.1, running 100 photos at a time, and I was just checking here to see whether freeing up as much processing power and memory as possible might help.

natowi commented 5 years ago

You have to increase the depth map downscale factor (e.g. to 4) to reduce computation time. Depending on how you capture the images, you could reduce the overlap (a higher time interval between taking the images means fewer images). Some problems with 2019.1 will be fixed in the upcoming release.
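One way to see why the downscale factor matters so much: the depth maps are computed on the downscaled images, so the work and memory per image shrink roughly with the square of the factor. A back-of-envelope sketch (assuming the 5472x3648 DJI Mavic 2 Pro resolution that shows up later in this thread; the quadratic scaling is an approximation, not an exact cost model for the DepthMap node):

```python
# Rough estimate of depth-map sizes at different downscale factors.
# 5472x3648 is the Mavic 2 Pro image resolution; actual runtime also
# depends on the SGM/Refine parameters, so treat this as an approximation.
width, height = 5472, 3648

for downscale in (1, 2, 4, 8):
    pixels = (width // downscale) * (height // downscale)
    print(f"downscale {downscale}: {pixels / 1e6:.1f} MPix per depth map")
```

Going from downscale 1 to 4 cuts the per-image pixel count by a factor of 16, which is why it has such a large effect on both speed and GPU memory.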

skinkie commented 5 years ago

On my first flight with the drone I took 1400 pictures,

Are your photos GPS-tagged?

ToddVG commented 5 years ago

Yes they are: a DJI Mavic 2 Pro.

I was able to come up with a model with Drone Deploy, but there is not a lot I can do with it after that. With Meshroom I can trim the model in MeshLab and then play with it in Blender.

I really must be missing something; I am just not able to get a final model.

skinkie commented 5 years ago

@ToddVG this might be of interest: https://openmvg.readthedocs.io/en/latest/software/Geodesy/geodesy/#use-case-command-line-used-for-a-flat-uav-survey

ToddVG commented 5 years ago

Skinkie,

So are you thinking that because of the size of my project I might be better off using OpenMVG? Or is OpenMVG an add-on to Meshroom?

skinkie commented 5 years ago

@ToddVG AliceVision/Meshroom is in part a fork of OpenMVG, but I don't know if this feature has been added.

ToddVG commented 5 years ago

Have you worked with AliceVision/Meshroom? Is there a file that I could cut and paste here that would let everyone see the settings I am currently running this project with? I am making small changes to different nodes each time, like:

1- DepthMap: Downscale from 2 to 4; Refine Number of Samples to 71; Refine Number of Depths to 15; Refine Number of Iterations to 61; Refine Nb Neighbour Cameras to 3
2- DepthMapFilter: Number of Nearest Cameras from 10 to 3; Min Consistent Cameras Bad Similarity to 3
3- Meshing: Max Input Points from 50,000,000 to 500,000; Max Points from 5,000,000 to 500,000
4- MeshFiltering: removed the check at "Keep Only the Largest Mesh"
5- Texturing: switched the texture file type from PNG to JPG

I am looking for any guidance on doing a project of this size; my plan is to do a whole 10-block section of our city.

ToddVG commented 5 years ago

@natowi I would love to hear what you think on this.

natowi commented 5 years ago

So are you thinking that because of the size of my project I might be better off using OpenMVG? Or is OpenMVG an add-on to Meshroom?

Meshroom is based on a fork of OpenMVG and has diverged over time. For some cases, OpenMVG currently offers more options, as they are not yet implemented in Meshroom. One of the features not yet implemented in Meshroom is optimized support for GPS data and drone imagery.

Is there a file that I could cut and paste here that would let everyone see the settings I am currently running this project with?

The settings are stored in the project.mg file, which can be opened as a text file. But I don't think sharing the .mg file itself would be of much use.
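If you only want to share the settings rather than the whole file: a .mg project file is plain JSON with a "graph" section mapping node names to their parameters, so a few lines of Python can pull out just the inputs of each node. A minimal sketch (the tiny sample graph here is a stand-in, not a real project):

```python
import json

def node_settings(mg_text):
    """Return {node_name: inputs} from the JSON text of a Meshroom .mg file."""
    project = json.loads(mg_text)
    return {name: node.get("inputs", {})
            for name, node in project.get("graph", {}).items()}

# Tiny stand-in project (a real file contains the full node graph):
sample = '{"graph": {"DepthMap_1": {"nodeType": "DepthMap", "inputs": {"downscale": 4}}}}'
print(node_settings(sample))  # → {'DepthMap_1': {'downscale': 4}}
```

That output is much easier to paste into a comment than the whole file.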


I would recommend you start with the lowest settings and a small portion of your dataset (~50-100 images) and see if the results check out. Depending on the computation time, you can then optimize the settings some more. Read this for details.

I would duplicate and fork the graph for each modified node to check for errors in the following nodes.

Texturing: to speed up texturing, you could also increase the texture downscale and reduce the texture size.

In FeatureExtraction, you can try disabling the Force CPU Extraction option to use your GPU.

You could also use Draft-Meshing for a quick preview reconstruction.

ToddVG commented 5 years ago

With all the settings I changed, and going to version 2018.1, it looks like I am going to make it all the way through this run. It is on Texturing now.

I wonder whether it was the changes or 2018.1. Can I run both versions on the same machine to test it?

skinkie commented 5 years ago

Can I run both versions on the same machine to test it?

You can run both versions.

natowi commented 5 years ago

...And there are some issues with large datasets in 2019.1 (they will be fixed in 2019.2).

ToddVG commented 5 years ago

Do you have any feel for when 2019.2 will be released?

Once Texturing is completed, I am going to run the whole dataset on 2019.1. I like the idea of using Draft-Meshing. I also like the idea of duplicating the nodes I change to see what the main issue seems to be.

Any help on how to do that would be appreciated

ToddVG commented 5 years ago

Watching the log while Texturing continues, I see a lot of lines like:

Camera 760/1360 (0 triangles)
Camera 761/1360 (0 triangles)

There are tons of (0 triangles) entries, with just a few showing 8, 10, or 24 triangles, and so on.

Have I wasted my time waiting for the final product, or is that to be expected?

natowi commented 5 years ago

Did you check the Meshing node output in the 3D viewer?

ToddVG commented 5 years ago

I am running 2018.1.

natowi commented 5 years ago

Open the Meshing folder and check the .obj in MeshLab.

ToddVG commented 5 years ago

I opened it; all the triangles are bigger, and I can't really tell what it is.

ToddVG commented 5 years ago

Here is a pic of my monitor showing the .obj from the Meshing folder; the other pic shows the 3D Viewer in Meshroom:

https://1drv.ms/f/s!Aqxi3rm5OtGihIQs6GWrRhbDMNy-3A

natowi commented 5 years ago

I think you need to increase your Meshing Max Input Points value. Since you have a large scene, the max points value might be too low to reconstruct a dense/accurate mesh of your large model.

ToddVG commented 5 years ago

I have reduced the number of pictures to 25% of the mapped area: 404 pictures, with only 8 showing red.

Downscale = 4, SGM Nb Neighbour Cameras = 6, and Refine Nb Neighbour Cameras = 4. It stopped at DepthMap.

There is red next to chunk 5 in the chunks portion of the node.

Here is my DepthMap log:

Program called with the following parameters:

[17:30:36.200778][info] CUDA-Enabled GPU. Device information:
[17:30:36.446625][info] Supported CUDA-Enabled GPU detected.
[17:30:37.010276][info] Found 1 image dimension(s):
[17:30:37.011276][info]  - [5472x3648]
[17:30:37.649881][info] Overall maximum dimension: [5472x3648]
[17:30:37.649881][info] Create depth maps.
number of CUDA devices: 1
0: GeForce GTX 1070
[17:30:37.650880][info] # GPU devices: 1, # CPU threads: 8
[17:30:37.650880][info] Plane sweeping parameters:

natowi commented 5 years ago

I sometimes get the same red bar error. So far, increasing the downscale factor has resolved it (and so does lowering SGM: Nb Neighbour Cameras and Refine: Nb Neighbour Cameras).

I guess this is related to the available GPU memory:

Downscale 1: red bar in the first chunk
Downscale 2: total size of volume map in GPU memory 1200 x 8 chunks = 9.6 GB GPU memory required; red bar in the second chunk
Downscale 4: total size of volume map in GPU memory 300 x 8 chunks = 2.4 GB GPU memory required; works

The total size of the volume map is the sum of log entries like the following:

Device 0 memory - used: 1651.150024, free: 6540.850098, total: 8192.000000
total size of volume map in GPU memory: 63.755859
...

One curious thing is that this problem does not occur only on large datasets, as one would expect, but also on small datasets (22 images). And it works on some with a total volume map size in GPU memory of 1000 x 24 chunks... Also, while running, only ~1 GB of GPU memory is being used.
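The arithmetic behind those red bars, as a sketch: the "total size of volume map in GPU memory" value in the log is per chunk, so the requirement is roughly that value times the number of chunks, and it has to fit into the "free" value from the device memory log line. The figures below are the ones quoted above; treat this as a rough model, since the observed usage does not always match:

```python
# Free GPU memory in MB, from the "Device 0 memory" log line above.
gpu_free_mb = 6540.85

# Per-chunk volume map size (MB) and chunk count for each downscale factor,
# using the figures quoted above.
cases = {2: (1200, 8),  # 9.6 GB required -> red bar
         4: (300, 8)}   # 2.4 GB required -> works

for downscale, (per_chunk_mb, chunks) in cases.items():
    required_mb = per_chunk_mb * chunks
    print(f"downscale {downscale}: {required_mb / 1000:.1f} GB required, "
          f"fits in free GPU memory: {required_mb <= gpu_free_mb}")
```

Under this model, downscale 2 needs ~9.6 GB but only ~6.5 GB is free, while downscale 4 needs ~2.4 GB and fits, which lines up with which runs fail.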

ToddVG commented 5 years ago

What would you do if you were me?

natowi commented 5 years ago

Try it with downscale 8, or split the dataset into smaller chunks and augment the reconstruction.

ToddVG commented 5 years ago

Well, I am now on my 12th try, and my node tree looks like Adam and Eve's family tree. The problem is that it is stuck at FeatureMatching. The node log's last two lines say:

SAVE GEOMETRIC MATCHES
TASK DONE IN (s) 18798.281000

and there is a red mark next to chunk 7. Last time it was next to chunk 5.

The AliceVision command screen shows this as the last few lines:

[08:31:58.210731][info] loadPairs: image pair (815588949, 2032055556) added. File: "C:/Users/Dellserver/Pictures/Meshroom_Projects/MeshroomCache/ImageMatching/1fba0a6b438939f186bced5b1d62a511b871440b/imageMatches.txt".
[08:31:58.211731][info] Number of pairs: 663
[08:31:58.211731][info] Putative matches
[08:31:58.211731][info] There are 404 views and 663 image pairs.
Loading regions
0%   10   20   30   40   50   60   70   80   90   100%
|----|----|----|----|----|----|----|----|----|----|
[08:32:13.951983][error] Invalid akaze regions files for the view 1081322909 :

natowi commented 5 years ago

Without the dataset, there is only so much I can think of to try. I would load the image dataset in chunks of ~50 images each (in order of capture time), using augment reconstruction. You could also try my pre-release 2019.2 dev build, as it includes some improvements which might be useful in your case.
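One way to prepare those chunks: DJI images have sequentially numbered filenames, so sorting by name approximates capture order. A sketch that copies an image folder into numbered chunk folders, which can then be imported one at a time via augment reconstruction (the folder layout and chunk size are illustrative, not a Meshroom feature):

```python
import shutil
from pathlib import Path

def split_into_chunks(src, dest, chunk_size=50):
    """Copy images from src into dest/chunk_000, dest/chunk_001, ...
    Sorting by filename approximates capture order for sequentially
    numbered drone photos. Returns the number of chunks created."""
    images = sorted(p for p in Path(src).iterdir()
                    if p.suffix.lower() in (".jpg", ".jpeg"))
    for i, img in enumerate(images):
        chunk_dir = Path(dest) / f"chunk_{i // chunk_size:03d}"
        chunk_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(img, chunk_dir / img.name)
    return -(-len(images) // chunk_size)  # ceiling division
```

For example, a 404-image folder with a chunk size of 50 would produce 9 chunk folders, the last one holding the remaining 4 images.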

ToddVG commented 5 years ago

I will try it. I really like the feel and setup of Meshroom, and I want it to work for me, so I will do anything I can to help.

Here is a link to my reduced (25%) dataset (photos); maybe you can give it a test run?

https://1drv.ms/u/s!Aqxi3rm5OtGihIQySm8Up22qxUzpEg?e=L9snmY

natowi commented 5 years ago

OK, this looks like a capturing issue. There are too many overexposed areas.

(screenshot: rc1)

This would be good: (screenshot: re2)

I would recommend you start with an easier scene, like capturing this building: (screenshot: rd3)

You should set your camera to manual mode and adjust it to the conditions. Also use a more structured flight pattern with good overlap.

ToddVG commented 5 years ago

@natowi I am so grateful for your help so far, and I hope for your help in the future, but I am just not ready to give up on my dataset yet (haha).

I have gotten so close with parts of my project, so I think I am just missing a couple of key settings. Last Tuesday and Tuesday night I ran a new project: the same 25% of my photos, but I broke all the pictures up into 4 groups and let it run.

This morning I got this, and below it is a JPG of my tree:

9/41 FeatureMatching

ToddVG commented 5 years ago

Below is my project file. I know I am starting to sound like a broken record, but I really feel I am getting so close.

```json
{
  "header": {
    "pipelineVersion": "1.0", "releaseVersion": "2018.1.0", "fileVersion": "1.1",
    "nodesVersions": { "CameraConnection": "1.0", "FeatureMatching": "1.0", "StructureFromMotion": "1.0", "DepthMapFilter": "1.0", "MeshFiltering": "1.0", "FeatureExtraction": "1.0", "PrepareDenseScene": "1.0", "ImageMatching": "1.0", "DepthMap": "1.0", "Meshing": "1.0", "Texturing": "2.0", "CameraInit": "1.0" }
  },
  "graph": {
    "CameraInit_1": { "nodeType": "CameraInit", "position": [0, 0], "parallelization": { "blockSize": 0, "size": 0, "split": 1 }, "uids": { "0": "2731003bd822aa3d98acd3185af7e6a65750fcd4" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "viewpoints": [], "intrinsics": [], "sensorDatabase": "C:\\Users\\Dellserver\\Downloads\\Meshroom-2018.1.0-win64\\Meshroom-2018.1.0\\aliceVision\\share\\aliceVision\\cameraSensors.db", "defaultFieldOfView": 45.0, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/cameraInit.sfm" } },
    "FeatureExtraction_1": { "nodeType": "FeatureExtraction", "position": [155, 0], "parallelization": { "blockSize": 40, "size": 0, "split": 0 }, "uids": { "0": "95f3d1791424e0e94476dd6b11be7c79921582df" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "input": "{CameraInit_1.output}", "describerTypes": ["sift"], "describerPreset": "normal", "forceCpuExtraction": true, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/" } },
    "ImageMatching_1": { "nodeType": "ImageMatching", "position": [310, 0], "parallelization": { "blockSize": 0, "size": 0, "split": 1 }, "uids": { "0": "fa3159035f7dcd6163d017d1b01e308e13112cae" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "input": "{FeatureExtraction_1.input}", "featuresFolders": ["{FeatureExtraction_1.output}"], "tree": "C:\\Users\\Dellserver\\Downloads\\Meshroom-2018.1.0-win64\\Meshroom-2018.1.0\\aliceVision\\share\\aliceVision\\vlfeat_K80L3.SIFT.tree", "weights": "", "minNbImages": 200, "maxDescriptors": 500, "nbMatches": 50, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/imageMatches.txt" } },
    "FeatureMatching_1": { "nodeType": "FeatureMatching", "position": [465, 0], "parallelization": { "blockSize": 20, "size": 0, "split": 0 }, "uids": { "0": "b3d521e0ccf09bbac7fd8f5067bc9660accd159e" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "input": "{ImageMatching_1.input}", "featuresFolders": "{ImageMatching_1.featuresFolders}", "imagePairsList": "{ImageMatching_1.output}", "describerTypes": ["sift"], "photometricMatchingMethod": "ANN_L2", "geometricEstimator": "acransac", "geometricFilterType": "fundamental_matrix", "distanceRatio": 0.8, "maxIteration": 2048, "maxMatches": 0, "savePutativeMatches": false, "guidedMatching": false, "exportDebugFiles": false, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/" } },
    "StructureFromMotion_1": { "nodeType": "StructureFromMotion", "position": [620, 0], "parallelization": { "blockSize": 0, "size": 0, "split": 1 }, "uids": { "0": "ab3008493d45a11557ae28cf65f17c8e24de32b8" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "input": "{FeatureMatching_1.input}", "featuresFolders": "{FeatureMatching_1.featuresFolders}", "matchesFolders": ["{FeatureMatching_1.output}"], "describerTypes": ["sift"], "localizerEstimator": "acransac", "lockScenePreviouslyReconstructed": false, "useLocalBA": true, "localBAGraphDistance": 1, "maxNumberOfMatches": 0, "minInputTrackLength": 2, "minNumberOfObservationsForTriangulation": 2, "minAngleForTriangulation": 3.0, "minAngleForLandmark": 2.0, "maxReprojectionError": 4.0, "minAngleInitialPair": 5.0, "maxAngleInitialPair": 40.0, "useOnlyMatchesFromInputFolder": false, "initialPairA": "", "initialPairB": "", "interFileExtension": ".abc", "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/sfm.abc", "outputViewsAndPoses": "{cache}/{nodeType}/{uid0}/cameras.sfm", "extraInfoFolder": "{cache}/{nodeType}/{uid0}/" } },
    "PrepareDenseScene_1": { "nodeType": "PrepareDenseScene", "position": [775, 0], "parallelization": { "blockSize": 0, "size": 0, "split": 1 }, "uids": { "0": "229096035dfb2ca7f2ef5b75291ec6af5ed20a86" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "input": "{StructureFromMotion_1.output}", "verboseLevel": "info" }, "outputs": { "ini": "{cache}/{nodeType}/{uid0}/mvs.ini", "output": "{cache}/{nodeType}/{uid0}/" } },
    "CameraConnection_1": { "nodeType": "CameraConnection", "position": [930, 0], "parallelization": { "blockSize": 0, "size": 0, "split": 1 }, "uids": { "0": "9bc873cf1ff92d75f36231ac42b65f560f1e463e" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "ini": "{PrepareDenseScene_1.ini}", "verboseLevel": "info" }, "outputs": {} },
    "DepthMap_1": { "nodeType": "DepthMap", "position": [1085, 0], "parallelization": { "blockSize": 3, "size": 0, "split": 0 }, "uids": { "0": "db5cb3fcaddf43de168808ff0c3ca51731fbcbd8" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "ini": "{CameraConnection_1.ini}", "downscale": 2, "sgmMaxTCams": 10, "sgmWSH": 4, "sgmGammaC": 5.5, "sgmGammaP": 8.0, "refineNSamplesHalf": 150, "refineNDepthsToRefine": 31, "refineNiters": 100, "refineWSH": 3, "refineMaxTCams": 6, "refineSigma": 15, "refineGammaC": 15.5, "refineGammaP": 8.0, "refineUseTcOrRcPixSize": false, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/" } },
    "DepthMapFilter_1": { "nodeType": "DepthMapFilter", "position": [1240, 0], "parallelization": { "blockSize": 10, "size": 0, "split": 0 }, "uids": { "0": "d48e458fa8d3c8d1013276f029f41fabc1c5156c" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "ini": "{DepthMap_1.ini}", "depthMapFolder": "{DepthMap_1.output}", "nNearestCams": 10, "minNumOfConsistensCams": 3, "minNumOfConsistensCamsWithLowSimilarity": 4, "pixSizeBall": 0, "pixSizeBallWithLowSimilarity": 0, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/" } },
    "Meshing_1": { "nodeType": "Meshing", "position": [1395, 0], "parallelization": { "blockSize": 0, "size": 1, "split": 1 }, "uids": { "0": "480bc157aa8d3a049d14bfc880620f4577e800c5" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "ini": "{DepthMapFilter_1.ini}", "depthMapFolder": "{DepthMapFilter_1.depthMapFolder}", "depthMapFilterFolder": "{DepthMapFilter_1.output}", "maxInputPoints": 50000000, "maxPoints": 5000000, "maxPointsPerVoxel": 1000000, "minStep": 2, "partitioning": "singleBlock", "repartition": "multiResolution", "angleFactor": 15.0, "simFactor": 15.0, "pixSizeMarginInitCoef": 2.0, "pixSizeMarginFinalCoef": 4.0, "voteMarginFactor": 4.0, "contributeMarginFactor": 2.0, "simGaussianSizeInit": 10.0, "simGaussianSize": 10.0, "minAngleThreshold": 1.0, "refineFuse": true, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/mesh.obj", "outputDenseReconstruction": "{cache}/{nodeType}/{uid0}/denseReconstruction.bin" } },
    "MeshFiltering_1": { "nodeType": "MeshFiltering", "position": [1550, 0], "parallelization": { "blockSize": 0, "size": 1, "split": 1 }, "uids": { "0": "0a9c7de948964cff614787976ba263106cb04404" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "input": "{Meshing_1.output}", "removeLargeTrianglesFactor": 60.0, "keepLargestMeshOnly": true, "iterations": 5, "lambda": 1.0, "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/mesh.obj" } },
    "Texturing_1": { "nodeType": "Texturing", "position": [1705, 0], "parallelization": { "blockSize": 0, "size": 1, "split": 1 }, "uids": { "0": "8ba091ea1d65493d1390b085d84f4ea660d2cf5a" }, "internalFolder": "{cache}/{nodeType}/{uid0}/", "inputs": { "ini": "{Meshing_1.ini}", "inputDenseReconstruction": "{Meshing_1.outputDenseReconstruction}", "inputMesh": "{MeshFiltering_1.output}", "textureSide": 8192, "downscale": 2, "outputTextureFileType": "png", "unwrapMethod": "Basic", "fillHoles": false, "padding": 15, "maxNbImagesForFusion": 3, "bestScoreThreshold": 0.0, "angleHardThreshold": 90.0, "forceVisibleByAllVertices": false, "flipNormals": false, "visibilityRemappingMethod": "PullPush", "verboseLevel": "info" }, "outputs": { "output": "{cache}/{nodeType}/{uid0}/", "outputMesh": "{cache}/{nodeType}/{uid0}/texturedMesh.obj", "outputMaterial": "{cache}/{nodeType}/{uid0}/texturedMesh.mtl", "outputTextures": "{cache}/{nodeType}/{uid0}/texture*.png" } }
  }
}
```
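Since only a few parameters change between attempts, a small script that diffs the inputs of two project files can show exactly what differs between runs. A sketch (this is a hypothetical helper, not part of Meshroom; it assumes both files follow the JSON layout above):

```python
import json

def diff_settings(mg_a, mg_b):
    """Yield (node, key, value_a, value_b) for node inputs that differ
    between two Meshroom .mg project files (paths to JSON files)."""
    graphs = []
    for path in (mg_a, mg_b):
        with open(path) as f:
            graphs.append(json.load(f)["graph"])
    a, b = graphs
    for node in sorted(set(a) & set(b)):  # nodes present in both graphs
        ins_a = a[node].get("inputs", {})
        ins_b = b[node].get("inputs", {})
        for key in sorted(set(ins_a) | set(ins_b)):
            if ins_a.get(key) != ins_b.get(key):
                yield node, key, ins_a.get(key), ins_b.get(key)
```

Running it on two saved attempts would print, for example, that only DepthMap_1's downscale changed, which makes it much easier to tell which tweak actually fixed (or broke) a run.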

natowi commented 5 years ago

I get a good result adding images in chunks of 10 images each, using augment reconstruction and the default pipeline. (screenshot: test4)

The "QVariant(Invalid) Please check your QParameters" error might be related to the 2018 version; see #258 and #223.