bharat-b7 / MultiGarmentNetwork

Repo for "Multi-Garment Net: Learning to Dress 3D People from Images, ICCV'19"

Generating my dataset #49

Open rupang818 opened 3 years ago

rupang818 commented 3 years ago

Hello @bharat-b7, the work you've done here is fascinating - thank you!

I'm working on generating my own dataset: I run PGN semantic segmentation on my images and then convert the output into the input data that MGN expects.

Here's the semantic segmentation that I was able to generate: [image: mcilroy_finish_vis]

Your README mentions that some additional "manual" processing was needed, and I was wondering how you processed the outputs of PGN semantic segmentation to obtain your segmentation maps, like this one: [image: segmentation]

In summary, I feel these are the inputs that are needed, but I don't know how to generate them:

Any further guidance would be greatly appreciated. Thank you!

rupang818 commented 3 years ago

Digging into the repo a little further, I found these issues that might be related:

> Please see the data processing steps in the readme. Once you get the segmentation labels from PGN, you just need to change the colours as follows: Pants (65, 0, 65), Short-Pants (0, 65, 65), Shirt (145, 65, 0), T-Shirt (145, 0, 65) and Coat (0, 145, 65); skin and hair are set to white. The colour choice was arbitrarily decided when training MGN, nothing technical about it.
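
For anyone else hitting this, here is a minimal sketch of that recolouring step, assuming PGN writes a single-channel label map with the standard CIHP label indices. The index-to-garment mapping, the skin/hair label set, and the file names below are my assumptions, and distinguishing Shirt vs. T-Shirt and Pants vs. Short-Pants still has to be decided by hand:

```python
import numpy as np
from PIL import Image

# Assumed CIHP/PGN label indices -> MGN garment colours (verify against your PGN label list).
PGN_TO_MGN = {
    9: (65, 0, 65),    # Pants  (use (0, 65, 65) instead for Short-Pants)
    5: (145, 65, 0),   # Shirt  (use (145, 0, 65) instead for T-Shirt)
    7: (0, 145, 65),   # Coat
}
# Labels painted white as skin/hair (assumed: hair, torso-skin, face, arms, legs).
SKIN_HAIR = {2, 10, 13, 14, 15, 16, 17}

def remap_pgn_to_mgn(label_map: np.ndarray) -> np.ndarray:
    """Convert an (H, W) PGN label map into an MGN-style RGB segmentation image."""
    out = np.zeros((*label_map.shape, 3), dtype=np.uint8)  # background stays black
    for label, colour in PGN_TO_MGN.items():
        out[label_map == label] = colour
    for label in SKIN_HAIR:
        out[label_map == label] = (255, 255, 255)
    return out

# Hypothetical file names.
labels = np.array(Image.open("pgn_labels.png"))
Image.fromarray(remap_pgn_to_mgn(labels)).save("mgn_segmentation.png")
```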

Does the tuple represent the RGB value? I'm currently not able to run OpenPose due to some compilation issues, but once I do, I imagine I'll get a (2, 25, 3)-shaped output that tells me which of the 25 body parts each pixel should fall under.
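
From what I've read, OpenPose's BODY_25 model outputs 25 2D keypoints per detected person as (x, y, confidence) triples rather than per-pixel part labels, so a (2, 25, 3) array would be two people (or views) × 25 keypoints × 3 values. Once it compiles, running it with `--write_json` should produce per-image JSON files that can be parsed roughly like this (the file name is hypothetical):

```python
import json
import numpy as np

# Hypothetical output file from OpenPose run with --write_json (BODY_25 model).
with open("frame_000000_keypoints.json") as f:
    data = json.load(f)

# One (25, 3) array of (x, y, confidence) per detected person.
keypoints = [
    np.array(p["pose_keypoints_2d"], dtype=np.float32).reshape(25, 3)
    for p in data["people"]
]

for i, kp in enumerate(keypoints):
    print(f"person {i}: shape {kp.shape}, mean confidence {kp[:, 2].mean():.2f}")
```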