wingniuqichao opened this issue 4 years ago
The vertex labels depend on the garment classes present in your image. E.g., if your image has a t-shirt and pants, just take those entries from `test_data.pkl`.
@wingniuqichao is your issue resolved? Other issues have discussed this in more depth, and Bharat has written longer, more detailed answers.
Actually, can this info be added to the Readme? It would be much more convenient to have it in one place rather than digging through issues.
I'm still confused on this. Some help, please?
Bharat said:
> vertexlabel is derived from the garment classes present in your image_x. E.g.: if the image contains a T-shirt and Pants, just select the vertices corresponding to these classes from `allTemplate_withBoundaries_symm.pkl`.
So the `vertexlabel` in `test_data.pkl` should match one of the arrays from `allTemplate_withBoundaries_symm.pkl`, right? However, when I checked `np.all(test_data["vertexlabel"] == template[GARMENT][1])` for each GARMENT in the dictionary's keys, where `[1]` selects the array rather than the psbody mesh, nothing returned `True`. My first question is: which garment label is featured in `test_data.pkl`?
Second question: what is meant specifically by "select the vertices" from `allTemplate_withBoundaries_symm.pkl`? How should these be combined? Should we stack the T-shirt array with the Pants array along the last axis? How do we format it properly for the network?
Last question: what would be the best way to create `vertexlabel` automatically from the output of the Part Grouping Network? I guess we'd have to check the label map for the existence of any of Pants (65, 0, 65), Short-Pants (0, 65, 65), Shirt (145, 65, 0), T-Shirt (145, 0, 65), or Coat (0, 145, 65). However, I notice the keys in the `allTemplate` are named like "TShirtNoCoat", implying that a coat cannot be paired with a T-shirt. But if the label map does detect a coat over a T-shirt, what should we put as the vertexlabel?
This sounds a bit complicated. Is there existing code for this, or code to run on user data in general, that can be shared, please?
Thanks a lot!
~~Update: FYI, I have tried generating `vertexlabel` using my best understanding of Bharat's descriptions, and I get the error `ValueError: A merge layer should be called on a list of at least 2 inputs. Got 1 inputs.` I must have done something wrong.~~ Never mind, it turned out this was a separate problem: I was trying to run inference on an image with no pose.
> So the `vertexlabel` in `test_data.pkl` should match one of the arrays from `allTemplate_withBoundaries_symm.pkl`, right? However, when I checked `np.all(test_data["vertexlabel"] == template[GARMENT][1])` for each GARMENT in the dictionary's keys, where `[1]` selects the array rather than the psbody mesh, nothing returned `True`. My first question is: which garment label is featured in `test_data.pkl`?
Garment templates are defined as subsets of vertices on the high-resolution SMPL mesh (27554 vertices). So `template[<GARMENT>][1]` returns a binary array where entries marked `1` indicate that those SMPL vertices belong to the template. This association with SMPL is defined to enable posing and rigging garments with SMPL.
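As a sanity check on that description, here is a hedged sketch of what `template[<GARMENT>][1]` looks like. The `template` dict is mocked with random masks (loading the real `allTemplate_withBoundaries_symm.pkl` requires the file and psbody); only the Pants/TShirtNoCoat key names and the 27554-vertex count come from this thread.

```python
import numpy as np

# Hedged mock: in allTemplate_withBoundaries_symm.pkl each garment maps to a
# (psbody mesh, binary vertex mask) pair; the mask covers the 27554 vertices
# of the high-resolution SMPL template. Random masks stand in for the real ones.
NUM_SMPL_VERTS = 27554
rng = np.random.default_rng(0)
template = {
    "Pants": (None, (rng.random(NUM_SMPL_VERTS) < 0.2).astype(np.int64)),
    "TShirtNoCoat": (None, (rng.random(NUM_SMPL_VERTS) < 0.3).astype(np.int64)),
}

# template[garment][1] is a 0/1 array; entries marked 1 are the SMPL vertices
# that belong to that garment template.
pants_mask = template["Pants"][1]
pants_vertex_ids = np.where(pants_mask)[0]  # indices into the SMPL mesh
print(pants_mask.shape)                       # (27554,)
print(sorted(np.unique(pants_mask).tolist())) # [0, 1]
```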
> Second question: what is meant specifically by "select the vertices" from `allTemplate_withBoundaries_symm.pkl`? How should these be combined? Should we stack the T-shirt array with the Pants array along the last axis? How do we format it properly for the network?
Sample code to generate `vertexlabel` for the first example in `test_data.pkl`. The first example contains Pants and a T-shirt (the 1st and 5th entries in `allTemplate_withBoundaries_symm.pkl`; 0 is kept for skin):

```python
test_data["vertexlabel"] = np.zeros((1, 27554, 1))  # batch_size x vertices x 1
# Index the vertex axis (axis 1) explicitly; indexing the array directly with
# vertex ids would hit the batch axis instead.
test_data["vertexlabel"][0, np.where(template['Pants'][1])[0]] = 1
test_data["vertexlabel"][0, np.where(template['TShirtNoCoat'][1])[0]] = 5
```
> Last question: what would be the best way to create vertexlabel automatically from the output of the Part Grouping Network? I guess we'd have to check the label map for the existence of any of Pants (65, 0, 65), Short-Pants (0, 65, 65), Shirt (145, 65, 0), T-Shirt (145, 0, 65), or Coat (0, 145, 65).
The vertexlabel depends only on which garments are present in the image. Just edit the above code based on the garments that PGN found in the image; you do not need the label map itself for vertexlabel.
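For completeness, a hedged sketch (not from the MGN repo) of turning a PGN label map into a list of detected garments. The class colours are the ones quoted in the question above; the key names other than Pants/TShirtNoCoat are assumptions, as is the `min_pixels` noise threshold.

```python
import numpy as np

# Assumed garment-name -> PGN colour mapping, taken from the colours quoted
# in this thread; key names beyond Pants/TShirtNoCoat are guesses.
PGN_COLORS = {
    "Pants": (65, 0, 65),
    "ShortPants": (0, 65, 65),
    "ShirtNoCoat": (145, 65, 0),
    "TShirtNoCoat": (145, 0, 65),
    "LongCoat": (0, 145, 65),
}

def detect_garments(label_map, min_pixels=50):
    """Return garment names whose colour covers at least min_pixels pixels."""
    detected = []
    for name, color in PGN_COLORS.items():
        mask = np.all(label_map == np.array(color), axis=-1)
        if mask.sum() >= min_pixels:
            detected.append(name)
    return detected

# Toy label map: mostly background, one pants patch, one t-shirt patch.
label_map = np.zeros((64, 64, 3), dtype=np.uint8)
label_map[40:60, 20:40] = (65, 0, 65)    # pants region
label_map[10:30, 20:40] = (145, 0, 65)   # t-shirt region
print(detect_garments(label_map))  # ['Pants', 'TShirtNoCoat']
```

The result can be fed straight into the `detected_garments` list in the snippet below Bharat's answer.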
> However, I notice the keys in the allTemplate are named like "TShirtNoCoat", implying that a coat cannot be paired with a T-shirt. But if the label map does detect a coat over a T-shirt, what should we put as the vertexlabel?
Currently MGN does not support layered clothing, such as a t-shirt under a coat. For such cases we keep the label of the outer garment (the coat).
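A tiny hedged sketch of that rule; the key names here are assumptions based on this thread's "TShirtNoCoat" naming, not names confirmed by the repo.

```python
# Hedged sketch: MGN has no layered clothing, so when a coat is detected
# together with an inner upper-body garment, keep only the outer garment.
def resolve_layering(detected_garments):
    detected = list(detected_garments)
    if "LongCoat" in detected:  # assumed key name for the coat template
        for inner in ("TShirtNoCoat", "ShirtNoCoat"):
            if inner in detected:
                detected.remove(inner)  # the coat covers the inner garment
    return detected

print(resolve_layering(["Pants", "TShirtNoCoat", "LongCoat"]))  # ['Pants', 'LongCoat']
```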
> This sounds a bit complicated. Is there existing code for this, or code to run on user data in general, that can be shared, please?
Using the above inline snippet:

```python
test_data["vertexlabel"] = np.zeros((1, 27554, 1))  # batch_size x vertices x 1
detected_garments = [<list of garment categories detected by PGN>]
for n, gar in enumerate(template.keys()):
    if gar in detected_garments:
        # index the vertex axis (axis 1); 0 is kept for skin
        test_data["vertexlabel"][0, np.where(template[gar][1])[0]] = n + 1
```
Thank you very much Bharat! This is an excellent and thorough explanation.
Given pose estimation (via OpenPose) and segmentation (via PGN), as well as a manually set garment category, is there existing code to convert these data to test_data.pkl? Maybe we can gather all the necessary steps to make it easier to generate user-specified data.
> Given pose estimation (via OpenPose) and segmentation (via PGN), as well as a manually set garment category, is there existing code to convert these data to test_data.pkl? Maybe we can gather all the necessary steps to make it easier to generate user-specified data.
Excellent suggestion. If someone has already implemented this, please shoot me an email.
@Frank-Dz I have it, but it's still in progress. Shoot me an e-mail at nathanbendich@gmail.com
@neonb88 Hi, would you please share the code that converts user data to test_data.pkl with me? I have just sent you an email. My e-mail is xiezhy6@mail2.sysu.edu.cn.
@neonb88 Hi, would you please share the code that converts user data to test_data.pkl with me? I have just sent you an email. My e-mail is lxz@tju.edu.cn.
@neonb88 Hi, would you please share the code that converts user data to test_data.pkl with me? I have just sent you an email. My e-mail is a75482022@gmail.com.
@neonb88 Hi, I am trying to generate data exactly mimicking what is shown in test_data.pkl with my own images. Would you share your code for that with me too? Rupang818@gmail.com, thanks!
Has anyone received the code to convert the data to test_data.pkl? I would really like to know, please. Thanks.
I have not received the code, so I am doing it myself.
I do not know how to get this value for my pictures. Can you help me? Thanks.