alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

Texturing with an alternative set of photos #763

Closed tpieco closed 4 years ago

tpieco commented 4 years ago

Is there a way to generate a mesh from one set of photos but then texture it with another set? I've tried a number of things, like unplugging the imagesFolder connector from the Texturing node and pointing it to the directory of the new photos, and swapping the generated images in the PrepareDenseScene folder with the alternative photos, but both cause the process to fail.

Is this possible at all now, or will it be in the future?

Thanks

natowi commented 4 years ago

For example, use a structured-light dataset for the reconstruction, then set the image dataset without structured light in the PrepareDenseScene node (Images Folders (+)). Use the same image names. Sample datasets

(Correction: duplicate the PrepareDenseScene node and connect it to the Texturing input.)
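Since the two image sets must use identical file names, a quick sanity check can catch mismatches before a long reconstruction fails. The sketch below is not part of Meshroom; `mismatched_names` is a hypothetical helper that simply compares the contents of the two folders:

```python
from pathlib import Path

def mismatched_names(mesh_dir, texture_dir):
    """Return file names present in one folder but not the other.

    Meshroom matches the replacement texturing images to the
    originals by file name, so both sets must use the same names.
    """
    a = {p.name for p in Path(mesh_dir).iterdir() if p.is_file()}
    b = {p.name for p in Path(texture_dir).iterdir() if p.is_file()}
    return sorted(a ^ b)  # symmetric difference: names not shared by both
```

An empty result means the folders line up; any names it returns are the ones to fix before wiring the second PrepareDenseScene node.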

(screenshot: im1)

tpieco commented 4 years ago

Thanks for the reply. The process completed but with mesh errors, seen in the image below. I might be completely wrong in thinking it would work, but I was trying to make more accurate roughness maps by taking polarized and unpolarized photos of an object, using the texture generated from the unpolarized photos for the roughness map.

(screenshot: additionalphotos)

natowi commented 4 years ago

So the "without" sample is computed from polarized images?

tpieco commented 4 years ago

Yes. The object on the right (without) is the result using polarized images. The object on the left uses the polarized images but with the unpolarized images added as Image Folders in the PrepareDenseScene node. Hope that makes sense.

natowi commented 4 years ago

I think in your case it is best to use the polarized images only. The results are good and there is no reason to add the normal images for texturing, as they may contain reflections we want to avoid.

tpieco commented 4 years ago

I can't fault the results I'm getting with polarized images; they're brilliant. It's just that creating roughness maps is a frustration for me, and I'm trying any tricks I can think of to make them more simply and accurately.
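For context, the usual cross-polarization trick isolates the specular component by subtracting the cross-polarized texture (highlights suppressed) from the unpolarized one; that difference can then feed a roughness map. A minimal sketch, assuming the two textures are already aligned and normalized to [0, 1] (the `specular_map` helper is illustrative, not a Meshroom feature):

```python
import numpy as np

def specular_map(unpolarized, cross_polarized):
    """Estimate a specular-only map as the per-pixel difference between
    an unpolarized texture and its cross-polarized counterpart.

    The cross-polarized image suppresses specular highlights, so
    subtracting it from the unpolarized image isolates them. Both
    inputs are float arrays in [0, 1] with identical shapes.
    """
    if unpolarized.shape != cross_polarized.shape:
        raise ValueError("textures must be aligned and equally sized")
    return np.clip(unpolarized - cross_polarized, 0.0, 1.0)
```

This only works if both textures are baked onto the same UV layout, which is exactly what texturing the same mesh with the two photo sets would provide.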

Thanks.

natowi commented 4 years ago

Projecting a pattern onto the surface and using it for the reconstruction could improve the quality of the surface reconstruction, but this can be difficult to set up.

You can also try adding powder or paint to your model to highlight more features (capture an image set for later texturing beforehand).

If you only need part of the model in high resolution for later use in a 3D modelling tool, you can generate a texture using Reflectance Transformation Imaging (RTI) and export the normal map.

hargrovecompany commented 4 years ago

I need the capability to swap from projected-pattern lighting images for the mesh to normal lighting for the texture. I can't find the image that shows the entire workflow. I found one in your wiki, but it's cut off on the left and doesn't show the start of that part of the workflow. I believe there was another issue here at one time that had an image of the full workflow... can anyone help? Thanks!

natowi commented 4 years ago

see https://github.com/alicevision/meshroom/wiki/Projected-Light-Patterns

hargrovecompany commented 4 years ago

Natowi, I finally found the post that I was looking for. https://groups.google.com/forum/#!topic/alicevision/Y1mde4F1KmU

I'm a bit confused....do I need to actually modify the workflow as in the link, or simply add the file location as in the link that you provided above?

Also, I have some really good sample image sets now. My rig is giving me great results. If anyone here might need a set of images for testing, let me know.

I haven't tuned in here in quite some time. I noticed something today about software to support a rig. I have a pretty good system, written in Python. It supports Pi cameras and a bunch of DSLR cameras, and it has the ability to mix them. It has color correction (post-capture processing) capabilities. Pretty nifty stuff. If you guys need any of it, let me know.

natowi commented 4 years ago

@hargrovecompany oh yes, thank you, I forgot, sorry for the mixup. @tpieco you could try this image; I corrected it in the wiki. (Without the new node, the images for texturing will also be used for depth map generation.)

> Also, I have some really good sample image sets now. My rig is giving me great results. If anyone here might need a set of images for testing, let me know.

Having a camera rig sample dataset for Meshroom under a creative commons license would be nice, so it could be used for testing and for tutorials similar to the monstree dataset.

> I noticed something today about software to support a rig

Yes, there is some basic rig support. https://github.com/alicevision/meshroom/wiki/Multi-Camera-Rig

> I have a pretty good system, written in python. It supports pi cameras and a bunch of DSLR cameras, and it has the ability to mix them. It has color correction (post capture processing) capabilities. Pretty nifty stuff. If you guys need any of it let me know.

Yes, we talked about this in https://github.com/alicevision/meshroom/issues/480. Your contribution is welcome.

tpieco commented 4 years ago

@natowi Fantastic! That does exactly what I was looking to do. @hargrovecompany Thanks for the link. I wasn't aware of that forum before.

hargrovecompany commented 4 years ago

I just ran a projected/normal sequence adding only the NORMAL images location under the PrepareDenseScene node. I just edited this; I entered bad info accidentally earlier. Simply adding the normal image folder location under the PrepareDenseScene folder location does not result in texture from the normal lighting images.
I will run it again and try to map the node connectors similar to the pic showing the mapping above. I did notice that the node connector options are different on the latest version of the software...

hargrovecompany commented 4 years ago

Here is a link to one of my sample sets....projected and normal lighting, full size human scan

https://www.dropbox.com/sh/3zmb8hqtd84at2n/AACU-YMBZmBT2lXusyXQ_FuIa?dl=0

feel free to use it however you would like...

hargrovecompany commented 4 years ago

I tried adding the second PrepareDenseScene node and added the folder location for normal images. I'm using version 2019.2.0. Here are a few shots of the workflow. I still seem to have my patterned-lighting images used as texture... (screenshots: nodes, folder)

tpieco commented 4 years ago

@hargrovecompany The photos in the Normal folder are also projected photos.

hargrovecompany commented 4 years ago

wow.....well, that's embarrassing....sorry about that

tpieco commented 4 years ago

lol. No probs.

canonex commented 4 years ago

Why does PrepareDenseScene have 2 outputs in some screenshots and 1 in others? The same for Texturing. I can't reproduce exactly the node setup in Projected Light Patterns...

I also have trouble because of this: https://github.com/alicevision/meshroom/issues/614#issuecomment-576946800. As I commented there, the node is causing an error.

Here is my setup: (screenshot: PrepareDense)

Thank you, Riccardo

natowi commented 4 years ago

I have updated the wiki.

canonex commented 4 years ago

I made the stupid mistake of not renaming the files correctly... the bold text in the wiki helped me.

Thank you, Riccardo

hargrovecompany commented 4 years ago

Here's another set of images of a test subject (a real American rodeo cowboy). I checked to make certain that it's correct: a set with pattern projection and a set with normal lighting, 0.35 seconds between them. https://www.dropbox.com/sh/rbegeqgihpp6xwj/AAAWZFLvBCG5PlPIk059vVJpa?dl=0 These are my images, so anyone here has my permission to use them.

natowi commented 4 years ago

@tpieco can you share your results with your polarized and unpolarized photos like you did before? I would like to add the comparison to the wiki/documentation.

tpieco commented 4 years ago

@natowi here are some screenshots but I'm not sure if they're going to be useful.

The first screenshot is the result of using cross polarized photos.

The second screenshot is the result using unpolarized photos, which surprisingly gives a better mesh than polarized photos.

The third screenshot is the result of using polarized photos, with unpolarized photos used for texturing without adding a new PrepareDenseScene node.

The fourth screenshot is the result of using polarized photos, with unpolarized photos used for texturing with a new PrepareDenseScene node added.

(screenshots: polarized, unpolarized, combined, combined2)

natowi commented 4 years ago

@tpieco Thank you for this nice example. I see you have a red icon on your images, which points to missing metadata or sensor information. Meshroom does a good estimation job, but adding this information can improve the overall accuracy.

tpieco commented 4 years ago

@natowi No problem. I shoot RAW photos and export them to PNGs using Capture One, which doesn't include the EXIF data. However, I work out the correct FoV using an online calculator. Thanks for the lookout.
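The FoV such a calculator reports follows from the pinhole relation FoV = 2 · atan(sensor width / (2 · focal length)); a small sketch (the 50 mm / full-frame values below are illustrative, not from this thread):

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view in degrees from the pinhole camera relation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 50 mm lens on a full-frame sensor (36 mm wide) gives about 39.6 degrees.
print(round(horizontal_fov_deg(50, 36), 1))
```

Note this uses the physical focal length and sensor width; for crop sensors, use the actual sensor width rather than the 35 mm-equivalent focal length.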

natowi commented 4 years ago

Solved and added to the wiki. https://github.com/alicevision/meshroom/wiki/Projected-Light-Patterns