arthurdouillard / CVPR2021_PLOP

Official code of CVPR 2021's PLOP: Learning without Forgetting for Continual Semantic Segmentation
https://arxiv.org/abs/2011.11390
MIT License

Question about validation data #5

Closed Nan-S closed 3 years ago

Nan-S commented 3 years ago

Hello, I have two questions about the validation data:

  1. Why does `val_dst` use `labels=list(labels), labels_old=list(labels_old)` instead of `labels=list(labels_cum)` like `test_dst`?
  2. Why is the "best" model always saved at the last iteration? How do we know that the last iteration gives the best model? https://github.com/arthurdouillard/CVPR2021_PLOP/blob/381cb795d70ba8431d864e4b60bb84784bc85ec9/run.py#L449
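For context, here is a minimal sketch of how these three label lists typically relate in class-incremental segmentation. This is illustrative only; the names `labels`, `labels_old`, and `labels_cum` follow the snippet above, but the function below is not the repo's actual code:

```python
# Illustrative sketch (not the repo's exact code): how the three label
# lists typically relate across steps of class-incremental segmentation.
def split_labels(all_classes, step, classes_per_step):
    """Return (labels, labels_old, labels_cum) for a given task step."""
    start = step * classes_per_step
    labels = all_classes[start:start + classes_per_step]  # new classes this step
    labels_old = all_classes[:start]                      # classes from past steps
    labels_cum = labels_old + labels                      # everything seen so far
    return labels, labels_old, labels_cum

# Example: a toy 5-class setting, adding 1 class per step, at step 2
labels, labels_old, labels_cum = split_labels(list(range(5)), step=2, classes_per_step=1)
# labels == [2], labels_old == [0, 1], labels_cum == [0, 1, 2]
```

The question, then, is whether the validation set should see only `labels` + `labels_old` (as `val_dst` does) or the full `labels_cum` (as `test_dst` does).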

arthurdouillard commented 3 years ago

Both come from the original codebase, https://github.com/fcdl94/MiB, and weren't coded by me.

  1. Both choices are debatable, although I feel that giving the validation set access to all cumulated labels would be a bit of "cheating": it would make things too easy if, for example, we wanted to do early stopping (we don't).
  2. The comment is indeed misleading: we are simply checkpointing the model regularly (https://github.com/arthurdouillard/CVPR2021_PLOP/blob/381cb795d70ba8431d864e4b60bb84784bc85ec9/run.py#L449) and after the final epoch (https://github.com/arthurdouillard/CVPR2021_PLOP/blob/381cb795d70ba8431d864e4b60bb84784bc85ec9/run.py#L482). So it's not necessarely at all the "best" model