alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

Select part of pointcloud for meshing #217

Closed pr0gr8mm3r closed 5 years ago

pr0gr8mm3r commented 6 years ago

I'm wondering if there is a way to use only part of the generated point cloud for meshing. I had to reduce the maximum number of points used for meshing to 2,000,000, as I don't have enough RAM for more (#195). The problem is that my point cloud includes a lot of the room I scanned my object in. Since the object shares the point budget with its surroundings, its quality is reduced a lot. Possible solutions would be deleting part of the cloud or prioritizing regions with high point density. Does anyone know how to do this? Thanks in advance.

fabiencastan commented 6 years ago

Yes, that's of course an important feature, but it is not yet implemented.

renanmgs commented 5 years ago

Any updates on this? It's really important, practically a vital feature.

fabiencastan commented 5 years ago

You should try the new release: https://github.com/alicevision/meshroom/releases/tag/v2019.1.0. You cannot directly edit the bounding box, but you can adjust the Min Observations Angle For SfM Space Estimation parameter on the Meshing node, which should work fine for your use case.
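This is not Meshroom's actual implementation, but the idea behind that parameter can be sketched in a few lines of numpy: a point seen only under a narrow angle between its viewing rays (typically distant background) contributes little to the reconstruction space and can be discarded. The threshold value below is hypothetical.

```python
import numpy as np

def max_observation_angle(point, cam_centers):
    """Largest pairwise angle (degrees) between the viewing rays
    from the cameras to a 3D point."""
    rays = cam_centers - point                   # vectors point -> camera
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    cos = np.clip(rays @ rays.T, -1.0, 1.0)      # pairwise ray cosines
    return np.degrees(np.arccos(cos).max())

# Two cameras 2 units apart, one nearby point and one far-away point.
cams = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
near = np.array([0.0, 0.0, 2.0])     # seen under a wide angle (~53 deg)
far  = np.array([0.0, 0.0, 50.0])    # seen under a sliver of an angle (~2 deg)

min_angle_deg = 10.0                 # hypothetical threshold
keep = [bool(max_observation_angle(p, cams) >= min_angle_deg)
        for p in (near, far)]
print(keep)  # [True, False]
```

Raising the threshold shrinks the estimated SfM space toward the densely observed subject, which is why it helps when the room around the object dominates the cloud.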

Khojanator commented 4 years ago

@fabiencastan I'm currently testing Meshroom, and my experience with the software has been really good! I'm interested in this feature (bounding box / reconstruction region) as well and would love to know if there's some way I can help develop it.

natowi commented 4 years ago

@Khojanator Image masking can be used to filter features before reconstruction (https://github.com/alicevision/meshroom/pull/708), but a generic background-removal tool for bulk mask generation (https://github.com/alicevision/meshroom/issues/713) is still missing.

Of course, being able to select a part of the SfM point cloud for reconstruction could still be useful in some cases.
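The selection itself is simple to express; here is a minimal numpy sketch (not Meshroom code) of keeping only the points inside a user-defined axis-aligned bounding box:

```python
import numpy as np

def crop_to_bbox(points, bbox_min, bbox_max):
    """Return only the points lying inside the axis-aligned bounding box."""
    inside = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    return points[inside]

# Toy cloud: one point on the subject, one on the room's far wall.
cloud = np.array([[0.1, 0.2, 0.3],
                  [5.0, 5.0, 5.0]])
cropped = crop_to_bbox(cloud,
                       bbox_min=np.array([-1.0, -1.0, -1.0]),
                       bbox_max=np.array([1.0, 1.0, 1.0]))
print(len(cropped))  # 1
```

The hard part in Meshroom is not the cropping but preserving the per-point metadata (visibilities) through such an edit, which is what the discussion below comes back to.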

fabiencastan commented 4 years ago

@Khojanator Yes, of course. We can set up a conference call to discuss how to implement it.

Khojanator commented 4 years ago

@natowi @fabiencastan Thanks for getting back to me. For some reason I didn't get notified even though there was an at-mention... Anyway, image masking isn't the best option for me, since I'm trying to do a full-body scan using a rig, something similar to this: https://web.twindom.com/twinstant-mobile-full-body-3d-scanner/. A person stands in the center and multiple images are taken from every direction. What works really well here is having distinct features in the background, which leads to better SfM results, so I suspect image masking would make the result worse. That said, I'm a novice in this area, so please correct me if I'm wrong. Once I can consistently get the point clouds of the scans produced in the same location/orientation, a bounding box / reconstruction region will let me reliably isolate the region for MVS where the person is. Thoughts? Let's find a time to chat further and set up a conference call!

fabiencastan commented 4 years ago

With image masking you can still decide to use all the feature points (unmasked) for the SfM and then apply the masks only in the depth maps. You can contact me at fabien.castan[at]mikrosimage.eu to set up a call in January.

NexTechAR-Scott commented 4 years ago

I agree, a bounding box is a huge need.

It can drastically reduce processing time and can eliminate the need to clean up the resulting mesh.

For me the two best tools are RealityCapture and Meshroom.

RC is stupid fast but lacks point cloud editing capabilities, which is somewhat mitigated by its bounding box control.

Meshroom is full of tweak options, but the glaring omission for me is point cloud editing and a bounding box.

Either one (or both) would make Meshroom the most robust CLI solution out there.

I've been chasing an interrupt point in the pipeline: bring the SFM.abc output into a third-party tool like Blender to clean up the point cloud, then bring that back into Meshroom to finish the pipeline automatically.

The blocker is that Meshroom won't process the edited Alembic; it does not throw an error, it just won't process it.

I can only assume it’s something about the Blender alembic that is not structured the way Meshroom needs it to be.

If anyone has some advice on the abc format that Meshroom expects I’d be grateful to hear it.

fabiencastan commented 4 years ago

> I can only assume it's something about the Blender alembic that is not structured the way Meshroom needs it to be.

It is not possible because we maintain visibility information in the ABC file (a notion specific to photogrammetry). It would be possible to create a node that re-imports an externally modified point cloud and remaps the 3D point visibilities onto it (as we do with meshes, where we allow retexturing an externally modified mesh).
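Such a remapping node does not exist yet; as a rough illustration of the idea (a hypothetical sketch, not Meshroom code), each edited point could inherit the camera-visibility list of its nearest original point, with a distance cutoff so that genuinely new geometry gets no visibilities:

```python
import numpy as np

def remap_visibility(orig_pts, orig_vis, edited_pts, max_dist=0.01):
    """For each edited point, inherit the visibility list (camera ids)
    of the nearest original point, via brute-force nearest neighbour."""
    vis = []
    for p in edited_pts:
        d2 = np.sum((orig_pts - p) ** 2, axis=1)   # squared distances
        i = int(np.argmin(d2))
        vis.append(orig_vis[i] if d2[i] <= max_dist ** 2 else [])
    return vis

orig = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
orig_vis = [[0, 1], [1, 2]]                 # cameras observing each point
edited = np.array([[0.001, 0.0, 0.0]])      # point kept (slightly moved) after cleanup
print(remap_visibility(orig, orig_vis, edited))  # [[0, 1]]
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of the brute-force loop, and would write the result back into the Alembic structure Meshroom expects.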

Khojanator commented 4 years ago

@fabiencastan sorry for the message here. I tried reaching out to you over fabien.castan[at]mikrosimage.eu, but got an address not found error. Is there a better way for us to get in touch? Feel free to send me an email at ahsan.khoja[at]gmail.com. I'd love to get this project going! Thanks!