shuxjweb / pixel_sampling


PRCC test performance in same cloth setting. #11

Closed jjunnii closed 1 year ago

jjunnii commented 1 year ago

Hi, I'm jjun. I have a question after reading your research. When I test the pretrained model you uploaded, I get the same performance as the paper in the cloth-changing setting. However, in the same-cloth setting, rank-1 is 99.6% and mAP is 95.6%, which is slightly lower than the performance reported in the paper. Can you tell me where the discrepancy comes from? The same holds for PCB (rank-1: 99.4%, mAP: 95.3%), MGN (rank-1: 99.6%, mAP: 98.4%), and HPM (rank-1: 98.9%, mAP: 93.2%) in the same-cloth setting.

shuxjweb commented 1 year ago

The difference comes from the model selection during training.

The uploaded models are only for the cloth-changing setting. We do not use the same model for evaluation in both the cloth-changing and same-cloth settings. Since the checkpoint is saved based on the evaluation results during training, you need to train another model and change the test set in `train_prcc_pixel_sampling.py` as follows:

Cloth-changing setting:

```python
do_train(cfg, model, train_loader, val_loader_c, optimizer, scheduler, loss_func, num_query_c, start_epoch, acc_best, lr_type='step')
```

Cloth-same setting:

```python
do_train(cfg, model, train_loader, val_loader_b, optimizer, scheduler, loss_func, num_query_b, start_epoch, acc_best, lr_type='step')
```
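A minimal sketch of how the two calls differ: in PRCC, camera B queries share clothes with the gallery while camera C queries involve a clothes change, so the two settings are selected simply by swapping the validation loader and query count passed to `do_train`. The helper below is hypothetical (not part of the repo); `val_loader_b`/`val_loader_c` and `num_query_b`/`num_query_c` are the names from the reply above.

```python
def select_eval_setting(setting, val_loader_b, num_query_b, val_loader_c, num_query_c):
    """Pick the (val_loader, num_query) pair for the requested PRCC test setting.

    'cloth-changing' -> camera C queries (different clothes from the gallery)
    'cloth-same'     -> camera B queries (same clothes as the gallery)
    """
    if setting == "cloth-changing":
        return val_loader_c, num_query_c
    if setting == "cloth-same":
        return val_loader_b, num_query_b
    raise ValueError(f"unknown PRCC setting: {setting}")


# Usage (hypothetical): pick the loader once, then pass it to do_train.
# val_loader, num_query = select_eval_setting("cloth-same",
#                                             val_loader_b, num_query_b,
#                                             val_loader_c, num_query_c)
# do_train(cfg, model, train_loader, val_loader, optimizer, scheduler,
#          loss_func, num_query, start_epoch, acc_best, lr_type='step')
```

The key point from the reply is that this choice must be made *before* training, because the best checkpoint is selected against whichever validation set is active.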