gulvarol / surreal

Learning from Synthetic Humans, CVPR 2017
http://www.di.ens.fr/willow/research/surreal

Reproducing the numbers on the synthetic test set #9

Closed mtli closed 6 years ago

mtli commented 6 years ago

Hi, I am trying to reproduce the segmentation results on the synthetic test set reported on page 5 of the paper (69.13% IoU, 80.61% accuracy). However, I couldn't match those numbers using either the pre-trained model or a model trained from scratch; in both cases the results are 2-3% lower on both metrics. The only thing I changed from the off-the-shelf code is the dataRoot parameter. Could you shed some light on reproducing those numbers?
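For what it's worth, here is a minimal numpy sketch of how mean IoU and pixel accuracy can be computed from integer label maps. This is a generic reference implementation, not the repo's actual evaluation code; details such as whether the background class counts toward the mean are assumptions:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Mean IoU and per-pixel accuracy for integer label maps.

    pred, gt: (H, W) integer arrays with values in [0, num_classes).
    """
    k = num_classes
    # Confusion matrix: rows = ground truth, columns = prediction.
    conf = np.bincount(gt.ravel() * k + pred.ravel(),
                       minlength=k * k).reshape(k, k)
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    # Per-class IoU; classes absent from both gt and pred give NaN
    # and are excluded from the mean.
    with np.errstate(invalid="ignore"):
        iou = tp / (tp + fp + fn)
    return np.nanmean(iou), tp.sum() / conf.sum()
```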

gulvarol commented 6 years ago

Hi, at the time I was working with the .png images, which I couldn't release because of storage constraints. The difference may come from the compression in the .mp4 files; I haven't checked this myself.
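For reference, here is one way the codec's effect could be quantified if a lossless reference frame is available. This is a sketch with placeholder file names; since the .png frames were not released, the lossless reference would have to be rendered on the user's side:

```python
import cv2
import numpy as np

# Placeholder paths: substitute a clip from the released data and a
# lossless frame rendered yourself (the .png set is not released).
cap = cv2.VideoCapture("clip.mp4")
ok, frame_mp4 = cap.read()            # first decoded frame, BGR uint8
assert ok, "failed to decode the first frame"
frame_ref = cv2.imread("frame0.png")  # lossless reference, BGR uint8

# Mean absolute per-pixel difference introduced by the video codec.
diff = np.abs(frame_mp4.astype(np.int16) - frame_ref.astype(np.int16))
print("mean |difference|: %.3f" % diff.mean())
```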

mtli commented 6 years ago

I see. I wonder if you could run your released model on your released dataset and report the numbers? That would help me set up a baseline. Thanks!

gulvarol commented 6 years ago

Sorry for the delay, I did a quick test:

With 8 stacks, on the synthetic test set of mp4 images:

|     | mp4 model | png model |
| --- | --------- | --------- |
| IoU | 66.66     | 61.14     |
| Acc | 77.99     | 72.76     |

Note that these numbers are at 64x64 output resolution; I didn't run the evaluation code after upsampling to 256x256. For the paper, the numbers were (IoU, Acc) = (68.77, 80.93) at 64x64 and (69.13, 80.61) at 256x256.
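If anyone does want to evaluate at 256x256, note that integer label maps should be upsampled with nearest-neighbor interpolation so class indices aren't blended. A minimal sketch, assuming the network output has already been argmax'd to a 64x64 label map:

```python
import numpy as np

def upsample_labels(labels, size=256):
    """Nearest-neighbor upsampling of an integer label map.

    Bilinear interpolation would average class indices into
    meaningless values, so pick the nearest source pixel instead.
    """
    h, w = labels.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return labels[np.ix_(rows, cols)]
```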

So training and testing on the mp4 images does indeed score 2-3% lower than training and testing on the png images.

mtli commented 6 years ago

This matches the numbers I am getting. Thanks for the verification!