MengHao666 closed this issue 3 years ago
Thanks. That code is for the pre-trained InterNet (snapshot_20.pth.tar), available in the Google drive.
So are all the results reported in the paper from the 20th epoch, i.e., one more epoch of training? Running your code produces snapshot_0.pth.tar through snapshot_19.pth.tar — which one should I use to compare results?
I think I changed the naming of the saved snapshots from 1~20 to 0~19 when releasing this code. You can compare snapshot_20.pth.tar with your results. snapshot_20.pth.tar might have slightly different results from the paper as we downsampled the images from 4K to 512x384 to prevent the fingerprint leak.
We are now running experiments on the machine_annot subset, for which we could not find the pre-trained checkpoints you provided. At the moment, you only provide the H+M pre-trained checkpoints for v1.0, so we can only reproduce your results by running your code with your configurations. Also, if all the results reported in the paper are from the 4K images, the comparison might not be entirely fair; hopefully the difference is not too large.
Sorry. In the camera-ready version, I changed all numbers to 512 resolution. You can use the numbers in the paper.
As there are too many subsets, and I personally consider (H+M) the final dataset (rather than the H-only and M-only ones), I decided to release only H+M.
Thanks.
Also, the test command here,

python test.py --gpu 0-3 --test_epoch 20 --test_set $DB_SPLIT

will fail, since the last epoch saved by your code is 19.