alicevision / AliceVision

Photogrammetric Computer Vision Framework
http://alicevision.org

Failed to estimate space from SfM: The space bounding box is too small. #869

Closed cryptopapi997 closed 4 years ago

cryptopapi997 commented 4 years ago

Hey guys, thank you so much for building this great library! I'm playing around with it and trying to run the pipeline over the command line, but I keep getting this error and unfortunately I'm not sure what exactly it means. Is there something wrong with the way I am taking pictures?

Below you can find the full error in case that's helpful. As I said, I obviously don't expect you to fix my errors for me; just a quick pointer in the right direction regarding the error message would be helpful. Thanks!
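
For context, I'm launching the pipeline roughly like this (the paths are placeholders, and the exact flags may differ slightly between Meshroom versions):

    # run the full command-line photogrammetry pipeline on a folder of images
    meshroom_photogrammetry --input /path/to/my/images --output /path/to/output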

stdout: Program called with the following parameters:
 * allowSingleView = 1
 * defaultCameraModel = "" (default)

stdout:  * defaultFieldOfView = 45
 * defaultFocalLengthPix = -1 (default)
 * defaultIntrinsic = "" (default)
 * groupCameraFallback =  Unknown Type "20EGroupCameraFallback"
 * imageFolder = "" (default)
 * input = "/tmp/tmpw12az41v/CameraInit/c448939571d5c70b05c9ae4ad416ada37d6273ca//viewpoints.sfm"
 * output = "/tmp/tmpw12az41v/CameraInit/c448939571d5c70b05c9ae4ad416ada37d6273ca/cameraInit.sfm"
 * sensorDatabase = ""
 * verboseLevel = "info"

stderr: [15:30:37.239990][warning] Image '1.png' focal length (in mm) metadata is missing.
Can't compute focal length (px), use default.

[Cut out the same focal-length warning, repeated 49 more times for my remaining 49 pictures, for better readability]

stderr: [15:30:37.244102][warning] Some image(s) have no serial number to identify the camera/lens device.
This makes it impossible to correctly group the images by device if you have used multiple identical (same model) camera devices.
The reconstruction will assume that only one device has been used, so if 2 images share the same focal length approximation they will share the same internal camera parameters.
50 image(s) are concerned.
stderr: 

stderr: [15:30:37.245909][info] CameraInit report:
    - # views listed: 50
       - # views with an initialized intrinsic listed: 50
       - # views without metadata (with a default intrinsic): 50
    - # intrinsics listed: 1
stderr: 

stderr: ERROR:root:Error on node computation: Error on node "Meshing_1":
Log:
Program called with the following parameters:
 * addLandmarksToTheDensePointCloud = 0
 * angleFactor = 15
 * colorizeOutput = 0
 * contributeMarginFactor = 2
 * depthMapsFilterFolder = "/tmp/MeshroomCache/DepthMapFilter/37d194520ff3a1577085777f48649f13fdf59ff2"
 * depthMapsFolder = "/tmp/MeshroomCache/DepthMap/8e5d198cdbd8e35dd7e6df9e1177c797e564b977"
 * estimateSpaceFromSfM = 1
 * estimateSpaceMinObservationAngle = 10
 * estimateSpaceMinObservations = 3
 * input = "/tmp/MeshroomCache/StructureFromMotion/51313d13835a41a92741afd6bb315a2d56a52df1/sfm.abc"
 * maxInputPoints = 50000000
 * maxPoints = 5000000
 * maxPointsPerVoxel = 1000000
 * minAngleThreshold = 1
 * minStep = 2
 * output = "/tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/densePointCloud.abc"
 * outputMesh = "/tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/mesh.obj"
 * partitioning =  Unknown Type "17EPartitioningMode"
 * pixSizeMarginFinalCoef = 4
 * pixSizeMarginInitCoef = 2
 * refineFuse = 1
 * repartition =  Unknown Type "16ERepartitionMode"
 * saveRawDensePointCloud = 0
 * simFactor = 15
 * simGaussianSize = 10
 * simGaussianSizeInit = 10
 * universePercentile = 0.999 (default)
 * verboseLevel = "info"
 * voteMarginFactor = 4

[15:30:37.827326][info] Found 1 image dimension(s): 
[15:30:37.827376][info]     - [480x640]
[15:30:37.829139][info] Overall maximum dimension: [480x640]
[15:30:37.829168][warning] repartitionMode: 1
[15:30:37.829175][warning] partitioningMode: 1
[15:30:37.829179][info] Meshing mode: multi-resolution, partitioning: single block.
[15:30:37.829185][info] Estimate space from SfM.
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to estimate space from SfM: The space bounding box is too small.
Aborted (core dumped)

stderr: Traceback (most recent call last):
  File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/cx_Freeze/initscripts/__startup__.py", line 14, in run
  File "/opt/Meshroom/setupInitScriptUnix.py", line 39, in run
  File "bin/meshroom_photogrammetry", line 144, in <module>
  File "/opt/Meshroom/meshroom/core/graph.py", line 1131, in executeGraph
  File "/opt/Meshroom/meshroom/core/node.py", line 274, in process
  File "/opt/Meshroom/meshroom/core/desc.py", line 453, in processChunk
RuntimeError: Error on node "Meshing_1":
Log:
Program called with the following parameters:
 * addLandmarksToTheDensePointCloud = 0
 * angleFactor = 15
 * colorizeOutput = 0
 * contributeMarginFactor = 2
 * depthMapsFilterFolder = "/tmp/MeshroomCache/DepthMapFilter/37d194520ff3a1577085777f48649f13fdf59ff2"
 * depthMapsFolder = "/tmp/MeshroomCache/DepthMap/8e5d198cdbd8e35dd7e6df9e1177c797e564b977"
 * estimateSpaceFromSfM = 1
 * estimateSpaceMinObservationAngle = 10
 * estimateSpaceMinObservations = 3
 * input = "/tmp/MeshroomCache/StructureFromMotion/51313d13835a41a92741afd6bb315a2d56a52df1/sfm.abc"
 * maxInputPoints = 50000000
 * maxPoints = 5000000
 * maxPointsPerVoxel = 1000000
 * minAngleThreshold = 1
 * minStep = 2
 * output = "/tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/densePointCloud.abc"
 * outputMesh = "/tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/mesh.obj"
 * partitioning =  Unknown Type "17EPartitioningMode"
 * pixSizeMarginFinalCoef = 4
 * pixSizeMarginInitCoef = 2
 * refineFuse = 1
 * repartition =  Unknown Type "16ERepartitionMode"
 * saveRawDensePointCloud = 0
 * simFactor = 15
 * simGaussianSize = 10
 * simGaussianSizeInit = 10
 * universePercentile = 0.999 (default)
 * verboseLevel = "info"
 * voteMarginFactor = 4

[15:30:37.827326][info] Found 1 image dimension(s): 
[15:30:37.827376][info]     - [480x640]
[15:30:37.829139][info] Overall maximum dimension: [480x640]
[15:30:37.829168][warning] repartitionMode: 1
[15:30:37.829175][warning] partitioningMode: 1
[15:30:37.829179][info] Meshing mode: multi-resolution, partitioning: single block.
[15:30:37.829185][info] Estimate space from SfM.
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to estimate space from SfM: The space bounding box is too small.
Aborted (core dumped)

stdout: Plugins loaded:  CameraCalibration, CameraInit, CameraLocalization, CameraRigCalibration, CameraRigLocalization, ConvertSfMFormat, DepthMap, DepthMapFilter, ExportAnimatedCamera, ExportColoredPointCloud, ExportMaya, FeatureExtraction, FeatureMatching, ImageMatching, ImageMatchingMultiSfM, KeyframeSelection, LDRToHDR, MeshDecimate, MeshDenoising, MeshFiltering, MeshResampling, Meshing, PrepareDenseScene, Publish, SfMAlignment, SfMTransform, StructureFromMotion, Texturing
Nodes to execute:  ['Meshing_1', 'MeshFiltering_1', 'Texturing_1', 'Publish_1']
WARNING: downgrade status on node "Meshing_1" from Status.ERROR to Status.SUBMITTED

[1/4] Meshing
 - commandLine: aliceVision_meshing  --input "/tmp/MeshroomCache/StructureFromMotion/51313d13835a41a92741afd6bb315a2d56a52df1/sfm.abc" --depthMapsFolder "/tmp/MeshroomCache/DepthMap/8e5d198cdbd8e35dd7e6df9e1177c797e564b977" --depthMapsFilterFolder "/tmp/MeshroomCache/DepthMapFilter/37d194520ff3a1577085777f48649f13fdf59ff2" --estimateSpaceFromSfM True --estimateSpaceMinObservations 3 --estimateSpaceMinObservationAngle 10 --maxInputPoints 50000000 --maxPoints 5000000 --maxPointsPerVoxel 1000000 --minStep 2 --partitioning singleBlock --repartition multiResolution --angleFactor 15.0 --simFactor 15.0 --pixSizeMarginInitCoef 2.0 --pixSizeMarginFinalCoef 4.0 --voteMarginFactor 4.0 --contributeMarginFactor 2.0 --simGaussianSizeInit 10.0 --simGaussianSize 10.0 --minAngleThreshold 1.0 --refineFuse True --addLandmarksToTheDensePointCloud False --colorizeOutput False --saveRawDensePointCloud False --verboseLevel info --outputMesh "/tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/mesh.obj" --output "/tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/densePointCloud.abc" 
 - logFile: /tmp/MeshroomCache/Meshing/796b8bd3480ca4ca9ab062eacbca5ac9bbe49295/log
 - elapsed time: 0:00:00.116275
WARNING: downgrade status on node "MeshFiltering_1" from Status.SUBMITTED to Status.NONE
WARNING: downgrade status on node "Texturing_1" from Status.SUBMITTED to Status.NONE
WARNING: downgrade status on node "Publish_1" from Status.SUBMITTED to Status.NONE

child process exited with code 1
cryptopapi997 commented 4 years ago

Update: It turns out this error was caused by a computer update that had somehow uninstalled CUDA.
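
For anyone else hitting this, a quick sanity check that the CUDA driver and toolkit are still installed (assuming an NVIDIA GPU and the standard tools on the PATH):

    # lists the driver version and visible GPUs; fails if the driver was removed
    nvidia-smi
    # prints the CUDA toolkit version, if the toolkit is still installed
    nvcc --version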

louis1lal commented 3 years ago

I get this error too. I just tested Meshroom with the monster photos from their test section. Then I tried with my own photos and I get this error:

Failed to estimate space from SfM: The space bounding box is too small.

I just bought a new computer with an NVIDIA RTX 2060, a 1 TB SSD, and 32 GB of RAM.

[10:43:41.357583][info] Found 1 image dimension(s):
[10:43:41.357583][info]     - [4032x1816]
[10:43:41.360574][info] Overall maximum dimension: [4032x1816]
[10:43:41.360574][warning] repartitionMode: 1
[10:43:41.360574][warning] partitioningMode: 1
[10:43:41.360574][info] Meshing mode: multi-resolution, partitioning: single block.
[10:43:41.360574][info] Estimate space from SfM.
[10:43:41.361572][fatal] Failed to estimate space from SfM: The space bounding box is too small.

WARNING:root:Downgrade status on node "MeshFiltering_1" from Status.SUBMITTED to Status.NONE
WARNING:root:Downgrade status on node "Texturing_1" from Status.SUBMITTED to Status.NONE

Is there any solution? Thanks

FrankDD81 commented 3 years ago

This only happens to me when there are only 2 cameras left. Have you tried another image set? You could take some pictures of a chair and try that.

It's not about CUDA. If there is no CUDA card, the Meshing step tells you that no CUDA device was found.

It works great with my GTX 1070 and 16 GB of RAM.

juniorrrrrrr commented 3 years ago

Hi, I have the same problem in Meshing: "Failed to estimate space from SfM: The space bounding box is too small". Searching around here, I already tried disabling "Estimate Space From SfM", but it didn't work; I also tried again with other photos, without success. How can I resolve this? I use version 2021.01. Is my machine too weak? (650 Ti, Core 2 Quad Q8200)

LloydTSmith commented 2 years ago

I am seeing the same error...

Failed to estimate space from SfM: The space bounding box is too small.

... for the first time, with an image set in which I placed the object in a photography tent to eliminate unwanted background. With images taken in open space with clutter in the background, the same Meshroom batch works fine. I have not yet tried to tweak any parameters, but I suspect the problem is that my default settings specify some lower bound on the bounding box dimensions and my object is too small. I am not sure where to find that parameter, but I will look for it next. If anyone has insight, it would be appreciated.
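
In case it helps anyone else investigating: judging from the aliceVision_meshing command line earlier in this thread, the space-estimation settings live on the Meshing node, so one experiment might look roughly like the sketch below (values in angle brackets are placeholders, most arguments are elided, and it is only my guess that disabling the SfM-based estimate would help rather than hide the underlying problem):

    # same aliceVision_meshing call as shown earlier in the thread, but with the
    # SfM-based bounding box estimation turned off (remaining arguments elided)
    aliceVision_meshing --input <StructureFromMotion cache>/sfm.abc \
        --depthMapsFolder <DepthMap cache> --depthMapsFilterFolder <DepthMapFilter cache> \
        --estimateSpaceFromSfM False \
        --outputMesh mesh.obj --output densePointCloud.abc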

fabiencastan commented 2 years ago

Usually it means that the output of the StructureFromMotion (SfM) node is too poor. Can you check the point cloud from the SfM node?
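
If you prefer the command line, one way to inspect it (exact option names may vary between AliceVision versions) is to export the SfM result to a PLY point cloud with ConvertSfMFormat and open it in a viewer such as MeshLab:

    # convert the StructureFromMotion output (sfm.abc) to a PLY point cloud for inspection;
    # the cache path below is a placeholder for your own MeshroomCache folder
    aliceVision_convertSfMFormat --input /path/to/MeshroomCache/StructureFromMotion/<hash>/sfm.abc --output sfm.ply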

LloydTSmith commented 2 years ago

OK, thanks. I see there are reports in the StructureFromMotion folder, but I am not sure how to interpret these results:

[log] Sequential SfM reconstruction
Dataset info: Views count: 30
Essential Matrix.
Robust Essential matrix:
 -> View I: id: 1657188973 image path: C:/MR_Inputs/LTS_02_32/PXL_20220616_180530874.jpg
 -> View J: id: 1711156304 image path: C:/MR_Inputs/LTS_02_32/PXL_20220616_180521958.jpg

LloydTSmith commented 2 years ago

I have tried photographing the object inside the tent several times, and every time the reconstruction fails at the same place. As I recall, I have had similar problems in the past when an object is rotated on a turntable rather than the camera moving around the object. I took double the number of images of the same object and added them to the same folder; the reconstruction finished, but the result was not at all representative of the actual object. It seems that SfM does not have enough information to reconstruct the scene without background objects in it. Maybe this is something pros know, but it is not obvious to a new user. I wouldn't mind including the background if there were an easy way to apply a bounding box to the result to extract the subject without having to do it in a third-party editor. I have tried that and found it not to be as easy as it should be.

LloydTSmith commented 2 years ago

My target object is about 20 x 10 x 10 cm (HxWxD). I created a slightly larger environment with an irregularly colored flat surface, about 60 x 60 cm, beneath the stationary object, and moved the camera around the object this time. All of the rest of the visible background was obscured with a white backdrop. In this scenario, the reconstruction succeeds (with a few flaws where I was less successful in positioning the camera). The photos in the initial "turntable" scenario were of much higher quality, and there were far more of them from more angles, but there were no cues as to camera position except in the appearance of the object itself.

I am curious to know whether the issue is (a) the small size of the object when sitting on a solid white, featureless surface/backdrop, or (b) the lack of foreground/background context for the algorithm to use in working its magic. I'd like to find a compromise: a simple surface for the object with just enough detail to ensure successful reconstruction, without spending a lot of time hand-editing the reconstructed scene to remove unwanted background.

I know this might be off-topic for the software forum, but the default pipeline (and maybe the algorithms) require or expect certain properties in the input images and the photographic scenario, so maybe it is an appropriate sidebar to the discussion about the error "Failed to estimate space from SfM: The space bounding box is too small."