In the processed dataset, how are the fused texture attributes generated from the clothing color and fabric annotations? There does not seem to be a one-to-one mapping between (color, fabric) pairs and the fused texture annotations. For example:
This is the mapping from clothing color and fabric annotations to texture attributes.
Most of the texture attributes follow the above mapping. The merged annotations of some images deviate from it for two reasons: 1) We did further data cleaning on the 512x256 version of the images. The color and fabric labels were annotated by annotators on the 1024x512 images, and some patterns appear differently after downsampling, so we re-cleaned those labels. 2) We also corrected errors made by annotators in the color and fabric annotations where we observed them (the ratio of incorrectly labeled annotations is small).
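For illustration, here is a minimal sketch of how such a lookup with per-image overrides could work; the table entries, ids, and helper names are placeholders, not the actual mapping from the repo:
TEXTURE_MAP = {  # placeholder (color_id, fabric_id) -> fused texture id entries
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 2,
}

def fuse_texture(image_name, color_id, fabric_id, overrides=None):
    # `overrides` stands in for the per-image corrections described above:
    # labels re-cleaned after 512x256 downsampling and fixed annotator errors.
    if overrides and image_name in overrides:
        return overrides[image_name]
    return TEXTURE_MAP[(color_id, fabric_id)]
Under this scheme, the per-image override table is exactly what breaks the one-to-one (color, fabric) → texture mapping observed in the question.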
Hope the answer addresses your question. Thanks!
Thanks for sharing this! And how do you generate the shape attributes? For each image, there are 12 shape attributes in the DeepFashion-MultiModal dataset but 15 in the processed dataset. For example:
Hi, for the shape annotations in the processed dataset, the definitions are:
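# 15 shape attributes: indices 0-1 are gender / hair length, indices 2-14 are attributes 0-12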
attr_names_list = [
'gender', 'hair length', '0 upper clothing length',
'1 lower clothing length', '2 socks', '3 hat', '4 eyeglasses', '5 belt',
'6 opening of outer clothing', '7 upper clothes', '8 outer clothing',
'9 skirt', '10 dress', '11 pants', '12 rompers'
]
The gender is obtained by parsing the file name, and the hair length is derived from the parsing maps. Attributes 0-6 are a subset of the shape annotations of DeepFashion-MultiModal. Attributes 7-12 indicate the presence of the corresponding class, i.e., 1 represents presence and 0 represents absence.
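For illustration, a minimal sketch of assembling the 15-dim shape vector under these rules, assuming file names start with MEN/WOMEN as in DeepFashion; the segmentation label ids, the hair-length binning, and the threshold are assumptions, not values from the repo:
import numpy as np

HAIR_ID = 2               # placeholder segmentation label id for hair
SHORT_HAIR_THRESH = 1000  # placeholder pixel-count threshold for the hair bin
PRESENCE_IDS = {'upper clothes': 5, 'outer clothing': 6, 'skirt': 7,
                'dress': 8, 'pants': 9, 'rompers': 10}  # attributes 7-12

def build_shape_vector(image_name, parsing_map, dfmm_shape_annos):
    # parsing_map: HxW integer array of segmentation labels
    # dfmm_shape_annos: the 12 DeepFashion-MultiModal shape annotations
    attrs = np.zeros(15, dtype=np.int64)
    # gender parsed from the file name (MEN-*/WOMEN-* prefix)
    attrs[0] = 1 if image_name.startswith('WOMEN') else 0
    # hair length binned from the hair area in the parsing map (assumed rule)
    hair_pixels = int((parsing_map == HAIR_ID).sum())
    attrs[1] = 0 if hair_pixels == 0 else (1 if hair_pixels < SHORT_HAIR_THRESH else 2)
    # attributes 0-6: copied from the DeepFashion-MultiModal shape annotations
    attrs[2:9] = dfmm_shape_annos[:7]
    # attributes 7-12: presence flags, 1 = present, 0 = absent
    for i, cls_id in enumerate(PRESENCE_IDS.values()):
        attrs[9 + i] = int((parsing_map == cls_id).any())
    return attrs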
Alright, thanks a lot!