open-mmlab / mmfashion

Open-source toolbox for visual fashion analysis based on PyTorch
https://open-mmlab.github.io/
Apache License 2.0
1.24k stars 281 forks

Are landmarks given when evaluating category/attribute prediction? #120

Open zyue1105 opened 3 years ago

zyue1105 commented 3 years ago

It seems to me that the annotated landmarks are given as input to the category/attribute prediction benchmark, which is a bit odd since we don't have landmarks annotated in real-world use: https://github.com/open-mmlab/mmfashion/blob/master/mmfashion/apis/test_predictor.py#L101.

I wanted to confirm whether the evaluation results in https://github.com/open-mmlab/mmfashion/blob/master/docs/MODEL_ZOO.md come from the annotated landmarks or from predicted landmarks.

Btw, the landmark size needs to be changed to be compatible with the RoI model (https://github.com/open-mmlab/mmfashion/blob/master/demo/test_cate_attr_predictor.py#L44), which caused the problems reported in https://github.com/open-mmlab/mmfashion/issues?q=is%3Aissue+is%3Aopen+invalid. Furthermore, the landmarks need to be predicted before being passed to the model, if I understand correctly.
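For reference, rescaling the landmark coordinates to the network input size (so RoI pooling indexes valid feature locations) would look roughly like this. This is a minimal NumPy sketch, not mmfashion's actual code; the `rescale_landmarks` name and the 224x224 input size are assumptions for illustration:

```python
import numpy as np

def rescale_landmarks(landmarks, orig_size, input_size=(224, 224)):
    """Rescale flat (x, y) landmark pairs [x1, y1, x2, y2, ...] from the
    original image size (h, w) to the network input size, so the
    coordinates stay consistent with the resized image fed to the model.
    Hypothetical helper -- not part of the mmfashion API."""
    lm = np.asarray(landmarks, dtype=np.float32).reshape(-1, 2)
    sx = input_size[1] / orig_size[1]  # width scale factor
    sy = input_size[0] / orig_size[0]  # height scale factor
    lm[:, 0] *= sx
    lm[:, 1] *= sy
    return lm.reshape(-1)

# e.g. landmarks annotated on a 512x512 image, model input is 224x224
scaled = rescale_landmarks([256, 256, 100, 50], (512, 512))
```

The same rescaling would apply whether the landmarks come from annotations or from a landmark detector run beforehand.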