arthurdouillard / CVPR2021_PLOP

Official code of CVPR 2021's PLOP: Learning without Forgetting for Continual Semantic Segmentation
https://arxiv.org/abs/2011.11390
MIT License

Reproduce problem on 15-1 #3

Closed ygjwd12345 closed 3 years ago

ygjwd12345 commented 3 years ago

I reproduced the 15-1 setting; the results are:

|  | 1-15 | 16-20 | mIoU | all |
|---|---|---|---|---|
| MiB | 35.1 | 13.5 | 29.7 | - |
| PLOP (reproduced, train) | 64.94 | 20.77 | 53.90 | 55.16 |
| PLOP (reproduced, --test) | 65.01 | 16.85 | 52.97 | 54.11 |
| PLOP (paper) | 65.12 | 21.11 | 54.64 | 67.21 |

The numbers from the training run are reasonable, but when re-evaluating with --test the result drops too much.

ygjwd12345 commented 3 years ago

In other words, the result differs between evaluating right after the whole training and re-running the same script with the same parameters but with the --test flag.

arthurdouillard commented 3 years ago

Hello, thanks for your report, it's indeed weird. I hadn't noticed it before because I was always re-training all steps.

I've made a hotfix that always reloads the saved weights (https://github.com/arthurdouillard/CVPR2021_PLOP/commit/381cb795d70ba8431d864e4b60bb84784bc85ec9), but I'll try to understand later why this happens.
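Roughly, the idea of the fix is to always reload the checkpoint saved at the end of the last step before computing the final metrics, so that a normal training run and a --test run evaluate exactly the same parameters. A simplified sketch of that idea (not the exact code in the repo; the checkpoint key, paths, and helper names here are just for illustration):

```python
import torch


def evaluate(model, val_loader, device="cpu"):
    """Placeholder metric loop (pixel accuracy); the real validator computes mIoU."""
    model.eval()
    model.to(device)
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)


def evaluate_last_step(model, val_loader, ckpt_path, device="cpu"):
    # Always reload the weights saved at the end of the last step instead of
    # trusting whatever happens to be in memory, so a training run and a
    # --test run score exactly the same parameters.
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state["model_state"])  # checkpoint key is illustrative
    return evaluate(model, val_loader, device)
```

With that in place, both modes go through the same reload-then-evaluate path, which is why the re-run below gives identical numbers.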

I've just re-run a 15-1 overlap PLOP; here are my results in training mode and in testing mode, as you did:

(old / new / all / avg)

Train mode: 66.4 / 19.31 / 55.19 / 67.09
Test mode: 66.4 / 19.31 / 55.19 / 67.09

Results differ slightly from the paper's (a bit better on old classes, a bit worse on new classes, and better overall), but that's expected since I ran this on a different machine than the one used for the paper.