cryptopapi997 closed this issue 4 years ago
Update: It turns out this error was caused by a computer update that somehow uninstalled CUDA.
I get this error too. I just tested Meshroom with the monster photos from their test section. Then I tried with my own photos and got this error:
Failed to estimate space from SfM: The space bounding box is too small.
I just bought a new computer with an NVIDIA RTX 2060, a 1 TB SSD, and 32 GB of RAM.
[10:43:41.357583][info] Found 1 image dimension(s):
[10:43:41.357583][info]  - [4032x1816]
[10:43:41.360574][info] Overall maximum dimension: [4032x1816]
[10:43:41.360574][warning] repartitionMode: 1
[10:43:41.360574][warning] partitioningMode: 1
[10:43:41.360574][info] Meshing mode: multi-resolution, partitioning: single block.
[10:43:41.360574][info] Estimate space from SfM.
[10:43:41.361572][fatal] Failed to estimate space from SfM: The space bounding box is too small.
WARNING:root:Downgrade status on node "MeshFiltering_1" from Status.SUBMITTED to Status.NONE
WARNING:root:Downgrade status on node "Texturing_1" from Status.SUBMITTED to Status.NONE
Is there any solution? Thanks
This only happens to me when there are only 2 cameras left. Have you tried another image set? You can take some pictures of a chair and try.
It's not about CUDA. If there is no CUDA card, the Meshing node tells you that no CUDA was found.
Works great with my GTX 1070 and 16 GB of RAM.
Hi, I have the same problem in Meshing: "Failed to estimate space from SfM: The space bounding box is too small". Searching around here, I already tried disabling "Estimate Space From SfM", but it didn't work; I tried again with other photos, without success either. How can I resolve this? I use version 2021.01. Is my machine too weak? 650 Ti, Core 2 Quad Q8200.
I am seeing the same error...
Failed to estimate space from SfM: The space bounding box is too small.
... for the first time with an image set in which I placed the object in a photography tent to eliminate unwanted background. With images taken in open space with clutter in the background, the same Meshroom batch works fine. I have not yet tried to tweak any parameters, but I suspect that the problem is that my default settings specify some lower bound for the bounding box dimensions, and the size of my object is too small. Not sure where to find that parameter, but I will look for it next. If anyone has insight, it would be appreciated.
Usually it means that the output of the StructureFromMotion (SfM) node is of poor quality. Can you check the point cloud from the SfM node?
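If it helps, here is a quick sketch for checking the extent of that sparse cloud from a script (assuming you export the SfM result to JSON first, e.g. with a ConvertSfMFormat node; the "structure" / "X" keys below are how that JSON stored landmarks last time I looked, so adjust if your version differs):

```python
# Minimal sketch: print the axis-aligned bounding box of the sparse point
# cloud produced by the StructureFromMotion node.
# Assumption: the SfM result has been exported to JSON (e.g. via a
# ConvertSfMFormat node) and each landmark in the "structure" array stores
# its 3D position under the key "X" as three numbers (or numeric strings).
import json
import sys

def sfm_bounding_box(sfm_json_path):
    with open(sfm_json_path, "r") as f:
        data = json.load(f)

    points = [[float(v) for v in lm["X"]] for lm in data.get("structure", [])]
    if not points:
        print("No 3D landmarks found - the SfM output is empty.")
        return

    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    extents = [maxs[i] - mins[i] for i in range(3)]

    print(f"Landmarks: {len(points)}")
    print(f"Bounding box min (x, y, z): {mins}")
    print(f"Bounding box max (x, y, z): {maxs}")
    print(f"Extents (x, y, z): {extents}")

if __name__ == "__main__":
    sfm_bounding_box(sys.argv[1])  # e.g. path/to/sfm.json
```

If the extents come out tiny, or there are only a handful of landmarks, that matches the "space bounding box is too small" failure in Meshing.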
OK, thanks. I see there are reports in the structure from motion folder, but I am not sure how to interpret these results:
[log] Sequential SfM reconstruction
Dataset info:
Views count: 30
Essential Matrix.
Robust Essential matrix:
-> View I:
   id: 1657188973
   image path: C:/MR_Inputs/LTS_02_32/PXL_20220616_180530874.jpg
-> View J:
   id: 1711156304
   image path: C:/MR_Inputs/LTS_02_32/PXL_20220616_180521958.jpg
I have tried photographing the object inside the tent several times, and every time the reconstruction fails at the same place. As I recall, I have had similar problems in the past when an object is rotated on a turntable rather than the camera moving around the object. I took double the number of images of the same object and added them to the same folder; the reconstruction finished, but the result was not at all representative of the actual object. It seems that SfM does not have enough information to reconstruct the scene without background objects in the scene. Maybe this is something pros know, but it is not obvious to a new user. I wouldn't mind including the background if there were an easy way to apply a bounding box to the result to extract the subject without having to do it in a third-party editor. I have tried that and found it not to be as easy as it should be.
My target object is about 20 x 10 x 10 cm (HxWxD). I created a slightly larger environment with an irregularly colored flat surface about 60 x 60 cm beneath the stationary object, and moved the camera around the object this time. All of the rest of the visible background was obscured with a white backdrop. In this scenario, the reconstruction succeeds (with a few flaws where I was less successful in positioning the camera). The photos in the initial "turntable" scenario were of much higher quality, and there were far more of them from more angles, but there were no cues as to camera position except in the appearance of the object.
Curious to know if the issue is a) the small size of the object when sitting on a solid white and featureless surface/backdrop, versus b) the lack of foreground/background context for the algorithm to use in working its magic. I'd like to find a compromise -- a simple surface for the object that has just enough detail to ensure successful reconstruction without spending a lot of time hand editing the reconstructed scene to remove unwanted background.
I know this might be off-topic for the software forum, but the default pipeline (and maybe the algorithms) requires/expects certain properties in the input images and the photographic scenario, so maybe it is an appropriate sidebar to the discussion about the error "Failed to estimate space from SfM: The space bounding box is too small."
Hey guys, thank you so much for building this great library! I'm playing around with it and trying to run the pipeline over the command line, but I keep getting this error and unfortunately am not sure what exactly it means. Is there something wrong with the way I am taking pictures?
Below you can find the full error in case that's helpful, but as I said, I obviously don't want you guys to fix my errors for me; a quick pointer in the right direction regarding the error message would be helpful. Thanks!
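For reference, this is roughly the kind of Python wrapper I am using to drive the headless pipeline (a minimal sketch only: the folder paths are placeholders, and the meshroom_batch entry point with its --input/--output flags is an assumption based on recent releases, so double-check against meshroom_batch --help for your version):

```python
# Minimal sketch of invoking Meshroom's headless pipeline from Python.
# Assumptions: "meshroom_batch" is on PATH (it ships with recent Meshroom
# releases; older ones used a different name), and it accepts --input and
# --output. Verify with `meshroom_batch --help` before relying on this.
import subprocess
from pathlib import Path

images = Path("C:/MR_Inputs/my_dataset")    # placeholder input folder
output = Path("C:/MR_Outputs/my_dataset")   # placeholder output folder
output.mkdir(parents=True, exist_ok=True)

result = subprocess.run(
    ["meshroom_batch", "--input", str(images), "--output", str(output)],
    capture_output=True,
    text=True,
)

# Print the full log so errors such as
# "Failed to estimate space from SfM: The space bounding box is too small."
# are visible even when running non-interactively.
print(result.stdout)
print(result.stderr)
result.check_returncode()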