alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

[question] Parameters for human body scanning #756

Open ihmc3jn09hk opened 4 years ago

ihmc3jn09hk commented 4 years ago

I have some questions related to scanning a full human body. I have tried several setups, but none gives satisfying results. I started with 32 images at a shorter focal length (larger FoV) to save time on the photo shoot. In FeatureExtraction, I used "SIFT" with the "High" preset. The result is pretty noisy, and the arms and legs are not recognized. Then I added AKAZE for extraction, matching and SfM, which gives a slightly better result, yet the arms and legs are still disconnected. I increased to 64 images: the previous 32 at large FoV plus 32 at a longer focal length (small FoV). 1-3 poses are missed in SfM. The resulting mesh is still noisy, but the arms are better. However, the legs are still broken. I guess the skin cannot be identified well. Any suggestions on what parameters are better for a human body, say dressed beach-style? Or photo-shooting techniques? Angle, lighting, etc.?

natowi commented 4 years ago

This depends on your setup. Can you share a sample image? (Image can be taken from behind the person to maintain privacy)

fabiencastan commented 4 years ago

In Meshroom, you are looking at the result of the StructureFromMotion, which is the sparse reconstruction. You have to compute the Meshing to get the dense part (it will also generate a dense point cloud in ABC): see the "Output Dense Point Cloud" param on the Meshing node. You can also activate "Save Raw Dense Point Cloud" to get all 3D points before the decision of where to cut the surface (like the colmap output).

ihmc3jn09hk commented 4 years ago

This is the testing dummy; in a real situation even more skin will be exposed. (The face came out broken and kind of scary, so it is censored! ;)

I guess my settings in Meshroom are wrong, since they gave me a result with very few points, shown in the 2nd figure. My guess is not from nowhere: I can get a much denser point cloud by using COLMAP with default settings. The 3rd and 4th images show the point cloud extracted by COLMAP, which has a higher point density, but the legs and arms cannot be reconstructed well there either. (I am not sure, but from the COLMAP docs it uses only SIFT?)

Back to Meshroom: even though I turned the describer preset up to "high" with "akaze" and "cctag3", the SfM point cloud is still less dense than the one shown in the 3rd and 4th images. What parameters best fit this case?

[sample image of the test dummy]

Meshroom (default settings) [image]

From COLMAP (default) [images]

ihmc3jn09hk commented 4 years ago

> You can also activate "Save Raw Dense Point Cloud" to get all 3D points before the decision of where to cut the surface

I see. Thanks for the reply. Could you please share some ideas on how to get a better reconstruction result for a scenario with mostly skin exposed to the view? (In the extreme, say, naked.)

natowi commented 4 years ago

> the SfM point-cloud is still less dense

You can open the node folder and drag-and-drop the dense point cloud into the 3D viewer to preview it.

--

How does the textured mesh look for your dummy?

-- You are using a camera rig, from what I can guess from the first image. What camera models are you using, and how many? Does Meshroom register all cameras?

ihmc3jn09hk commented 4 years ago

> You are using a camera rig, from what I can guess from the first image. What camera models are you using, and how many? Does Meshroom register all cameras?

  • Check that the sensor database information for your camera is correct
  • When capturing clothed models, choose well-structured clothing if possible (a patterned swimsuit, for example)
  • The area of interest for each camera should fill most of the image frame
  • Get as close as possible to your model
  • Make sure your images have enough overlap
  • Avoid light reflections on the skin

Ok, I have tried this dataset with the following result. The fingers are missing. The settings were "SIFT" and "AKAZE" at "high". What extra parameters should I configure to achieve a result like this one (using the same dataset)?

natowi commented 4 years ago

In the ImageMatching node, modify Max Descriptors and Nb Matches to get more matches.

Also https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters
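These node parameters can also be applied headlessly. As a hedged sketch (the exact attribute names and the `--overrides` flag are assumptions to verify against your Meshroom version with `meshroom_batch --help`), one can generate an overrides file like this:

```python
# Hypothetical sketch: building an overrides file for meshroom_batch.
# Node/attribute names (ImageMatching maxDescriptors/nbMatches,
# FeatureExtraction describer settings) are assumptions; check them
# against the node editor of your Meshroom version before relying on this.
import json

overrides = {
    "ImageMatching": {
        "maxDescriptors": 0,  # 0 = do not limit descriptors per image
        "nbMatches": 0,       # 0 = keep all candidate image pairs
    },
    "FeatureExtraction": {
        "describerTypes": ["sift", "akaze"],
        "describerPreset": "high",
    },
}

with open("overrides.json", "w") as f:
    json.dump(overrides, f, indent=2)
```

Then a run would look something like `meshroom_batch --input ./images --output ./out --overrides overrides.json` (again, assuming your build ships that flag).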

fabiencastan commented 4 years ago

Using pattern projection changes the precision you can get a lot. Then do the texturing with a second set of pictures taken without projection. There are some datasets you can download from http://www.pi3dscan.com to experiment with this.

natowi commented 4 years ago

> Then do the texturing with a second set of pictures taken without projection.

This can be done by setting a second image set as imagesFolders input in the PrepareDenseScene node.

https://github.com/alicevision/meshroom/wiki/Projected-Light-Patterns

ihmc3jn09hk commented 4 years ago

> In the ImageMatching node, modify Max Descriptors and Nb Matches to get more matches.
>
> Also https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters

I just tried setting them to 0 as suggested, yet still no luck. Could the order of the images, or the pairing, be a factor?

ihmc3jn09hk commented 4 years ago

> Using pattern projection changes the precision you can get a lot. Then do the texturing with a second set of pictures taken without projection. There are some datasets you can download from http://www.pi3dscan.com to experiment with this.

That's a good idea. I can foresee a sync issue that will have to be addressed. But before using extra information, I am just curious how to play with the parameters in Meshroom to optimize for a specific scenario. Since the author of that dataset can achieve a pretty good result, I will treat it as a benchmark for tuning the parameters. Can any improvement be achieved via the parameters of the SfM node? Do they correspond to some physical meaning?

hargrovecompany commented 4 years ago

I spent the past year building a full-body scan rig. I can tell you from experience that there are not enough features (are we now calling them landmarks?) to get a lot of points for meshing. Projecting a pattern will make a HUGE difference. I use 4 projectors. Front and rear are positioned just above knee level, pointing slightly upwards; this helps ensure that you get the pattern into the crotch, under the arms, and under the chin. The projectors on the sides are above head level, pointing downward. This helps get the top of the head.

Important lessons learned: try to get the elapsed time between the projected and normal lighting shot sequences to less than 1/2 second. My experience is that normal movement (i.e. breathing) is generally not a big problem with an elapsed time under 1/2 second, but after that the movement goes up exponentially. If someone moves only 1/4 inch from side to side during the shot sequence, the impact is huge. As an example, the pupils of the eyes are not going to align when you use the normal images for texture over a mesh created from the projected images. The nostrils will look huge. Etc.
Also, it's a really good idea to color-calibrate your cameras and correct every image: you'll get a lot more matching points if the image-to-image color is consistent. Surprisingly, there is enough variation between one camera and the next (right out of the box) to justify the color calibration process.
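The color-calibration idea above can be sketched numerically. A minimal, hypothetical per-camera correction (not Meshroom functionality): each camera photographs the same neutral gray card, and each RGB channel is scaled so the card patch matches a shared target gray. A production pipeline would use a full color chart and work in linear RGB, but the principle is the same.

```python
# Sketch: per-camera gray-card colour correction (illustration only).
# Assumes every camera has photographed the same neutral gray card;
# we scale each RGB channel so the card patch hits a common target gray,
# making colours consistent across cameras before feature matching.
import numpy as np

def gray_card_gains(patch_rgb, target_gray=128.0):
    """Per-channel gains computed from an HxWx3 crop of the gray card."""
    channel_means = patch_rgb.reshape(-1, 3).astype(np.float64).mean(axis=0)
    return target_gray / channel_means

def apply_gains(image_rgb, gains):
    """Apply the gains to a full image and clip back to 8-bit range."""
    corrected = image_rgb.astype(np.float64) * gains
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Example: a camera that renders the gray card as (64, 128, 160)
patch = np.tile(np.array([64.0, 128.0, 160.0]), (4, 4, 1))
gains = gray_card_gains(patch)        # -> [2.0, 1.0, 0.8]
balanced = apply_gains(patch, gains)  # every pixel becomes (128, 128, 128)
```

The same gains, measured once per camera, would then be applied to every image that camera shoots in a session.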

hargrovecompany commented 4 years ago

Oh, one more thing.....lighting! Skin and a lot of fabrics are much more reflective than many might realize. Very evenly distributed lighting will keep you from having "white stripes" on, for example, bare legs after rendering.

ihmc3jn09hk commented 4 years ago

> Important lessons learned....try to get your elapsed time between projected and normal lighting shot sequence to be less than 1/2 second. [...]
>
> Oh, one more thing.....lighting! [...]

Thank you for sharing the experience! A professional hardware setup. It sounds like synchronization is a big problem, both among the cameras themselves and with the projections. Also, don't you think a polarized light source would be better for reducing reflections on skin and garments, or even on long shiny hair?

hargrovecompany commented 4 years ago

Sorry for taking so long to respond. About your question on polarized light sources: I really can't answer that, because I didn't use any polarization in my rig. I just added 12 lights distributed evenly around the rig. Initially I used open LED light strip tape, but that wasn't good enough; I had to add plastic diffusers. I will try to find a finished 3D model that shows the "stripe" effect the lights left on blue jeans, even with 12 strips of diffused lights.

Even though it might not seem like it, you are very close to getting good results. I'll be glad to help if I can.

natowi commented 4 years ago

tpieco shared a nice comparison between results with/without polarized photos: https://github.com/alicevision/meshroom/issues/763#issuecomment-583904068

ihmc3jn09hk commented 4 years ago

> Sorry for taking so long to respond.... [...]
>
> Even though it might not seem like it, you are very close to getting good results. I'll be glad to help if i can...

@hargrovecompany Thanks so much for your help. I am trying the projector setup from the suggestions. As I am using the dummy for testing, the legs can be reconstructed with around 60+ images. However, some relatively tiny features are still problematic. I think the depth-map process removes the landmarks of tiny features, because those features/landmarks do exist after the SfM step (shown in the 3D view). I am looking for the parameters that can fix this.

But the major issue I am facing now is the synchronization between cameras and cameras, cameras and projectors, and projectors and projectors. I am building a small test rig (4 Pis with cams, 4 projectors), using the Picamera library with Node.js to communicate the trigger commands (Capture, ProjectorOn) to all Pis. The time at which the Pi cams START capturing an image varies by ~150 ms, and projector on/off by ~100 ms as well. The timing difference is quite random. Given the suggestion that consecutive images should be taken within < 100 ms, I cannot achieve this requirement, since the START time error is comparatively too big on both the cameras and the projectors. There are some other issues with the projector method too: flare from the projectors, and focus issues (e.g. the pattern/noise cannot be focused at every position of the body, say from head to legs).
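The trigger jitter described above is usually attacked by scheduling rather than by racing the network: the controller broadcasts a capture timestamp slightly in the future, and every Pi (with NTP-synchronised clocks) waits until that shared instant. A minimal sketch with hypothetical helper names (not part of picamera):

```python
# Sketch: schedule-based triggering to cut command-latency jitter.
# Assumes all Pis keep their clocks NTP-synchronised (typically within a
# few ms on a LAN). Helper names are illustrative, not a real library.
import time

def schedule_trigger(lead_s=0.5, now=None):
    """Controller side: pick a shared wall-clock capture time far enough
    ahead that the command reaches every Pi before it elapses."""
    if now is None:
        now = time.time()
    return now + lead_s

def wait_for_trigger(trigger_ts, now_fn=time.time, sleep_fn=time.sleep):
    """Pi side: coarse sleep most of the way, then spin the last ~5 ms
    so the capture fires as close to trigger_ts as the clock allows."""
    while True:
        remaining = trigger_ts - now_fn()
        if remaining <= 0:
            return
        if remaining > 0.005:
            sleep_fn(remaining - 0.005)  # coarse sleep
        # final few ms: busy-wait for precision
```

Each Pi would call `wait_for_trigger(ts)` and then start its capture, so residual skew is bounded by clock synchronisation rather than by message latency. The same timestamp scheme can sequence ProjectorOn at T and the capture at T plus a fixed offset.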

@natowi Thank you for the information on polarized lighting for illumination. I can remove that idea from the TODO list.

richard-bmc commented 3 years ago

[screenshot] We built a camera rig for scanning the human body. However, the resulting mesh usually has a thin wrist; sometimes it even disappears entirely.

[depth map] After checking the depth maps output by the DepthMapFilter node, I found they are quite noisy. Somehow it looks like the right wrist is cut off in the depth map.

I wonder if there are any parameters I can change to improve this result.

[hand-only image] @hargrovecompany Since we found that the photos have different brightness around the right wrist, color correction is our next move. Projecting patterns on the human body is worth trying; I would like to try it if color correction doesn't work.

ohadOrbach commented 3 years ago

Hi, I'm also trying to make a 3D human model using Meshroom.

Can you please share the changes you made in the node settings (ver. 2021) for best performance?

Thank you.