bharat-b7 / MultiGarmentNetwork

Repo for "Multi-Garment Net: Learning to Dress 3D People from Images, ICCV'19"

Please guide on full pipeline from image to dressing #60

Open AlyanQ opened 3 years ago

AlyanQ commented 3 years ago

There is not nearly enough information in the readme, or even in the issues, to explain clearly how someone would go from an image to a clothed 3D model of the person in it. The readme states that the input is an image plus keypoints, but it does not explain what format the input takes, where it is supposed to go, or how to transform our own data into something the model can forward. From what I have been able to gather, the pipeline looks something like this:

  1. Acquire an image of a user and crop it to 720x720
  2. Run the CIHP-PGN program on the image to obtain a part segmentation (I assume for the garment parts)
  3. Run OpenPose on the image to obtain pose_keypoints (I assume to set the pose of the SMPL model)
  4. Transform all of this data into the same format as assets/test_data.pkl (I have no clue how to do this, and judging by other issues, neither do many other people; see the inspection sketch after this list)
  5. Run some script in the repo on this test_data.pkl file to obtain a 3D mesh with clothing (I have no clue which script needs to be run to go from user data to a generated clothed mesh)
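
Since the exact layout of assets/test_data.pkl is undocumented, the most reliable way I have found to attack step 4 is to open the sample pickle and print its structure. Here is a minimal sketch of that, assuming the pickle was written from Python 2 (hence the latin1 encoding) and that the values are mostly numpy arrays:

```python
import pickle
import numpy as np

# Load the sample input shipped with the repo; encoding='latin1' is needed
# when a Python 2 pickle is read from Python 3.
with open('assets/test_data.pkl', 'rb') as f:
    dat = pickle.load(f, encoding='latin1')

# Print every key with the shape/dtype of its value so the expected input
# format (images, 2D joints, segmentation labels, ...) becomes visible.
for key in sorted(dat.keys()):
    value = dat[key]
    if isinstance(value, np.ndarray):
        print('{}: ndarray {} {}'.format(key, value.shape, value.dtype))
    else:
        print('{}: {}'.format(key, type(value).__name__))
```

If I understand correctly, replicating those keys and shapes with my own image, segmentation, and keypoints is what step 4 amounts to, but confirmation would be appreciated.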

Step 5 is especially unclear: there are four scripts in total, and none of them have clearly documented inputs and outputs explaining how to use them effectively.
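
In case it helps show where I am stuck, below is how I am currently trying to bridge steps 3 and 4: read the JSON that OpenPose writes with --write_json, resize the image and the CIHP-PGN segmentation to 720x720, and dump everything into a pickle that mirrors whatever the inspection above reveals. The key names ('image_0', 'segmentation_0', 'J_2d_0') and file names are placeholders I made up and would need to be replaced with the real keys from assets/test_data.pkl:

```python
import json
import pickle
import cv2
import numpy as np

# Read the 2D keypoints written by OpenPose with --write_json.
# Newer OpenPose versions use the key 'pose_keypoints_2d' (older ones used
# 'pose_keypoints'); the list is flat [x0, y0, c0, x1, y1, c1, ...].
with open('image_keypoints.json') as f:
    people = json.load(f)['people']
keypoints = np.array(people[0]['pose_keypoints_2d']).reshape(-1, 3)

# Resize the input image and the CIHP-PGN part segmentation to 720x720.
# Nearest-neighbour interpolation keeps the segmentation labels intact.
# Note: if the original image is not already 720x720, the keypoints need to
# be rescaled by the same factor as the image.
image = cv2.resize(cv2.imread('image.png'), (720, 720))
segmentation = cv2.resize(cv2.imread('image_segmentation.png', cv2.IMREAD_GRAYSCALE),
                          (720, 720), interpolation=cv2.INTER_NEAREST)

# Placeholder key names -- replace them with whatever inspecting
# assets/test_data.pkl shows the network actually expects.
sample = {
    'image_0': image.astype(np.float32) / 255.0,
    'segmentation_0': segmentation,
    'J_2d_0': keypoints,
}

with open('my_test_data.pkl', 'wb') as f:
    pickle.dump(sample, f)
```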

Please advise on a structured, start-to-finish pipeline for running inference on arbitrary example data, and correct my methodology if I am making any errors. As of now, steps 4 and 5 are the real problems for me, since there is no guidance on how to format user data for input to the network and no guidance on what to use to generate the clothed meshes. Thanks.