Open amrahsmaytas opened 4 years ago
@amrahsmaytas Hi, 1/2. To run inference, just prepare an image: resize it to the expected size, normalize it, and convert it to a tensor, as the data loader does. Meanwhile, prepare a one-hot attribute input indicating the attribute embedding you want the network to use. 3/4. I have to do some data migration; I will share the download link with you once it's ready.
Thanks for the quick reply.
Can you please guide me in more detail about creating a one-hot attribute input? For example, if I want to detect a particular type of neckline given in DeepFashion, how do I create a one-hot attribute input for it?
Is a one-hot encoding the same as the embeddings used in the paper?
@amrahsmaytas Hi, 1/2. To run inference, just prepare an image: resize it to the expected size, normalize it, and convert it to a tensor, as the data loader does.
Can you give the command to run inference after making the above modification?
For example (correct me if it's wrong):
If I have created a new folder testimages that contains the images I want to test, does the command:
python main.py --data_path ./testimages --model ASENet_V2 --resume ./ASENet_V2/checkpoint.pth.tar
do the job, or is there anything else I have to add to it?
@amrahsmaytas An attribute embedding matrix is declared in the model class; the one-hot attribute input is used to index a particular attribute embedding. The one-hot input of each attribute is created according to the order of attributes in meta.json. For example, the DeepFashion dataset has 5 attributes in total and "texture-related" is the first attribute, so its one-hot input is [1, 0, 0, 0, 0].
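Building the one-hot input from the attribute order can be sketched as follows (the attribute names below are illustrative placeholders; the authoritative list and order come from meta.json):

```python
# Minimal sketch: derive a one-hot attribute input from the position of
# an attribute name in an ordered list. The names here are assumptions;
# the real order is defined by meta.json.
attributes = ["texture-related", "fabric-related", "shape-related",
              "part-related", "style-related"]  # illustrative order

def one_hot(attr_name, attr_list):
    """Return a one-hot list with a 1 at the position of attr_name."""
    vec = [0] * len(attr_list)
    vec[attr_list.index(attr_name)] = 1
    return vec

print(one_hot("texture-related", attributes))  # -> [1, 0, 0, 0, 0]
```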
@amrahsmaytas Hi, 1/2. To run inference, just prepare an image: resize it to the expected size, normalize it, and convert it to a tensor, as the data loader does.
Can you give the command to run inference after making the above modification? For example (correct me if it's wrong): if I have created a new folder testimages that contains the images I want to test, does the command:
python main.py --data_path ./testimages --model ASENet_V2 --resume ./ASENet_V2/checkpoint.pth.tar
do the job, or is there anything else I have to add to it?
I'm sorry, but our code cannot be run directly like that; you need to implement it yourself. Besides, our network generates image features for retrieval, so you should split your own test images into queries and candidates.
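The retrieval step described in the reply could be sketched like this (a minimal sketch with random vectors standing in for the attribute-specific features the model would produce; the feature dimension and counts are arbitrary):

```python
# Sketch of attribute-specific retrieval: rank candidate images against
# each query by cosine similarity of their extracted features. Random
# vectors stand in for real model outputs here.
import numpy as np

rng = np.random.default_rng(0)
query_feats = rng.standard_normal((2, 1024))       # features for 2 queries
candidate_feats = rng.standard_normal((10, 1024))  # features for 10 candidates

def l2_normalize(x):
    """Normalize each row to unit length so dot products give cosine sim."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

q = l2_normalize(query_feats)
c = l2_normalize(candidate_feats)
sims = q @ c.T                       # (2, 10) cosine similarity matrix
ranking = np.argsort(-sims, axis=1)  # candidates sorted best-first per query
print(ranking[:, 0])                 # index of the top candidate per query
```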
@amrahsmaytas An attribute embedding matrix is declared in the model class, the one-hot attribute input is used to index certain attribute embedding. One-hot input of each attribute is created according to the order of attributes in meta.json. For example, DeepFashion dataset has 5 attributes in total and "texture-related" is the first attribute, then its one-hot input is like [1, 0, 0, 0, 0].
Does that mean we can only retrieve texture-related attributes?
Is it also possible to retrieve a particular kind of texture-related attribute from an input image, e.g. fuzzy, furry, or smooth, which are types of texture, from the one-hot input as mentioned ([1, 0, 0, 0, 0])?
If not, what should the one-hot input be if I want to retrieve some 10 types of texture-related attributes in DeepFashion using the ASEN approach?
Final one on this topic: is it even possible to retrieve some x types (let x be 10 here) of texture-related attributes from DeepFashion?
Thanks
@amrahsmaytas An attribute embedding matrix is declared in the model class; the one-hot attribute input is used to index a particular attribute embedding. The one-hot input of each attribute is created according to the order of attributes in meta.json. For example, the DeepFashion dataset has 5 attributes in total and "texture-related" is the first attribute, so its one-hot input is [1, 0, 0, 0, 0].
- Does that mean we can only retrieve texture-related attributes?
- Is it also possible to retrieve a particular kind of texture-related attribute from an input image, e.g. fuzzy, furry, or smooth, which are types of texture, from the one-hot input as mentioned ([1, 0, 0, 0, 0])?
- If not, what should the one-hot input be if I want to retrieve some 10 types of texture-related attributes in DeepFashion using the ASEN approach?
- Final one on this topic: is it even possible to retrieve some x types (let x be 10 here) of texture-related attributes from DeepFashion?
Thanks
I guess you are asking how attribute values within an attribute are learned; e.g. fuzzy and smooth are both attribute values under the texture-related attribute. Specific attribute values are implicitly learned from the training triplets. An attribute value cannot be determined until the image is compared to other images.
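One way to read an attribute value off by comparison, in the spirit of the reply above, is nearest-neighbour label transfer: embed a gallery of images whose attribute values are known, then assign a query the value of its closest gallery item in the attribute-specific embedding space. A minimal sketch with stand-in features (the labels and dimensions are illustrative, not from the repository):

```python
# Sketch of nearest-neighbour label transfer in an attribute-specific
# embedding space. Random vectors stand in for real model features.
import numpy as np

rng = np.random.default_rng(1)
gallery = rng.standard_normal((6, 128))  # embeddings of labeled gallery images
labels = ["fuzzy", "fuzzy", "smooth", "smooth", "furry", "furry"]
query = gallery[2] + 0.01 * rng.standard_normal(128)  # near a "smooth" item

def nearest_label(q, feats, labs):
    """Return the label of the gallery item closest to the query."""
    dists = np.linalg.norm(feats - q, axis=1)
    return labs[int(np.argmin(dists))]

print(nearest_label(query, gallery, labels))  # -> smooth
```

This matches the reply's point: the value is only determined relative to other (labeled) images, not predicted directly by the network.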
Got it. Is there any way I can predict the attribute values within that attribute during the comparison, and get only the predicted attribute value (not the image) as the result?
What would be the necessary modification for such a case?
@ZJU-ZhangY @Maryeon
Can you please guide me through the steps to run inference of the model with the pretrained weights (i.e. without downloading the DeepFashion2, DARN, or FashionAI datasets)?
Is running inference possible with the code as given, or do I have to modify the main.py file to run the demo (inference)?
I have tried downloading the DARN dataset, but it took around 193 GB for just 40k images, whereas the total number of images mentioned is around 200k. Am I doing something wrong?
Could you please point me to links where I can download the DARN dataset / FashionAI dataset directly (please share the links to greetsatyamsharma@gmail.com if you don't want to share them publicly)?
Thanks, awaiting a quick response.