Open · fromagge opened this issue 2 years ago
Hi @tamnesiac, thanks for trying MeshroomCL. It looks like the default "Photogrammetry (OpenCL)" pipeline is completely failing to align your images (only 3 out of 914!). One reason for this might be that your camera sensor dimensions are being estimated rather than read from metadata, since I see the yellow icon in the upper-left corner of each photo.
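For background (this is how stock Meshroom/AliceVision handles intrinsics, and I'm assuming MeshroomCL behaves the same way): the sensor width is looked up in AliceVision's cameraSensors.db using the EXIF Make and Model tags, so if your phone isn't listed you can add a semicolon-separated `Make;Model;SensorWidthInMM` line for it. Below is a minimal Python sketch for pulling the Make/Model out of one of your photos; the image path and the sensor width value are placeholders you would fill in yourself, and the location of cameraSensors.db inside your MeshroomCL install is something you'd need to check.

```python
# Minimal sketch: read the EXIF Make/Model from one photo so a matching entry can
# be added to AliceVision's cameraSensors.db (its location inside the MeshroomCL
# install varies; check your copy). Requires Pillow.
from PIL import Image
from PIL.ExifTags import TAGS

IMAGE_PATH = "IMG_0001.jpg"   # placeholder: any photo from your dataset
SENSOR_WIDTH_MM = 6.4         # placeholder: look up the real sensor width for your phone

exif = Image.open(IMAGE_PATH).getexif()
tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
make = str(tags.get("Make", "")).strip()
model = str(tags.get("Model", "")).strip()

# cameraSensors.db entries are semicolon-separated: Make;Model;SensorWidthInMM
print(f"{make};{model};{SENSOR_WIDTH_MM}")
```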
As you know, an alternative to the default MeshroomCL pipeline is to select the "Photogrammetry OpenCL (MVS)" pipeline, which uses the original Meshroom nodes for image alignment. I'd recommend trying this, and then adjusting the "Max Image Size" attribute in the MultiviewStereoCL node to something relatively small. If MaxImageSize is set to 1000 pixels or less, I think the computation should complete in a reasonable amount of time, and the result will still be a big improvement over the "draft" meshing of the sparse cloud that you were doing in Meshroom.
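If you'd rather script that change than set it in the graph editor, a saved MeshroomCL project (.mg file) is a JSON graph, so the attribute can also be set there. The sketch below assumes the same "graph"/"nodeType"/"inputs" layout as stock Meshroom project files and guesses "maxImageSize" as the attribute key on the MultiviewStereoCL node, so double-check those names against your own .mg file.

```python
# Minimal sketch: lower "Max Image Size" by editing a saved MeshroomCL project
# (.mg) file directly. The project path, the "graph"/"nodeType"/"inputs" layout,
# and the "maxImageSize" key are assumptions -- verify them in your own file.
import json

PROJECT_PATH = "scan.mg"   # placeholder: path to your saved project

with open(PROJECT_PATH) as f:
    project = json.load(f)

for name, node in project["graph"].items():
    if node.get("nodeType") == "MultiviewStereoCL":
        node.setdefault("inputs", {})["maxImageSize"] = 1000
        print(f"Set maxImageSize=1000 on {name}")

with open(PROJECT_PATH, "w") as f:
    json.dump(project, f, indent=4)
```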
We also distribute software called COLMAP-CL (https://github.com/openphotogrammetry/colmap-cl) that allows you to compute depth maps in an incremental manner, so that even if it took 40 hours, you could start and stop the process whenever you had spare time on your computer to make progress one chunk at a time. COLMAP-CL does not do texturing, but you could import the mesh back into MeshroomCL to texture it.
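For reference, here is a rough sketch of how that depth-map stage could be scripted and resumed. It is based on upstream COLMAP's documented dense-reconstruction commands and assumes COLMAP-CL mirrors that CLI; the executable name and the workspace layout (a "dense" folder created beforehand by image_undistorter) are assumptions, so please check the COLMAP-CL documentation before relying on it.

```python
# Rough sketch of the dense (depth-map) stage as a scriptable step, based on
# upstream COLMAP's documented CLI and assuming COLMAP-CL mirrors it -- the
# executable name, flags, and workspace layout are assumptions to verify.
import subprocess

COLMAP = "colmap"     # assumption: name of the COLMAP-CL executable on your system
WORKSPACE = "dense"   # assumption: workspace prepared earlier with image_undistorter

def run(*args):
    subprocess.run([COLMAP, *args], check=True)

# Depth maps are computed per image, so (as described above) this step can be
# stopped and restarted later to make progress one chunk at a time.
run("patch_match_stereo", "--workspace_path", WORKSPACE)

# Once the depth maps are done, fuse them into a point cloud and mesh it; the
# resulting mesh can then be brought back into MeshroomCL for texturing.
run("stereo_fusion", "--workspace_path", WORKSPACE,
    "--output_path", f"{WORKSPACE}/fused.ply")
run("poisson_mesher", "--input_path", f"{WORKSPACE}/fused.ply",
    "--output_path", f"{WORKSPACE}/meshed-poisson.ply")
```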
If you are able to make a subset of your images available to us, we can investigate why your images are failing to align with the default MeshroomCL pipeline. Please let us know what happens if you experiment with a smaller MaxImageSize setting.
Hey! Thanks for your quick reply.
> One reason for this might be that your camera sensor dimensions are being estimated rather than read from metadata, since I see the yellow icon in the upper-left corner of each photo.
You are correct, the metadata doesn't contain the sensor dimensions, as the photos were taken with a phone.
> I'd recommend trying this, and then adjusting the "Max Image Size" attribute in the MultiviewStereoCL node to something relatively small. If MaxImageSize is set to 1000 pixels or less...
Currently I'm running the OpenCL (MVS) pipeline, and I'll let you know the results.
> If you are able to make a subset of your images available to us, we can investigate why your images are failing to align with the default MeshroomCL pipeline. Please let us know what happens if you experiment with a smaller MaxImageSize setting.
I can provide you with that data no problem. Email me at ygabo@pm.me and I'll send you the link!
I've been using Meshroom for a while, and most of the time the results I get are satisfactory for what I need to accomplish. However, I only found out about MeshroomCL about a week ago and decided to give it a try. With the same sample data and the default presets for both apps, I'm getting very different results.
Specs:
- Windows 10
- Ryzen 3600
- RX 5700
- 32GB RAM
Meshroom (non-CL) using only CPU:
MeshroomCL using CPU & GPU:
Photogrammetry (OpenCL) pipeline
I wanted to try the mixed pipeline, but by my estimates it would take 40+ hours to complete, and right now I'm not able to do that.
I don't know if this is the expected behavior or just a misconfiguration on my side; the only thing I changed was to skip the DepthMap node, because I don't have a CUDA-enabled GPU.
Any insight on this will be highly appreciated.
you also don't have the texturingCL view selected
> you also don't have the texturingCL view selected
I didn't bother with it in the OpenCL run due to the lack of data from structure from motion; the same applies to the other pipeline.
ah I see