Open markussteindl opened 3 weeks ago

Bug description

Testing from a given checkpoint logs the epoch number of the last checkpoint instead of the one specified:
The second test logs epoch 10 instead of epoch 2. Similarly, the step number reported by the second test is incorrect.

What version are you seeing the problem on?

v2.2.1

I guess this could be caused by the same issue as https://github.com/Lightning-AI/pytorch-lightning/issues/18060: the checkpoint callback is not the last callback called, so some loop counters are not yet updated when the checkpoint is written. Have a look at the fields mentioned in https://github.com/Lightning-AI/pytorch-lightning/issues/18060#issuecomment-2080180970 and see whether this explains the behavior you notice; it might also offer you a workaround.
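For illustration, here is a minimal, self-contained sketch of the reported behavior (a hypothetical `FakeTrainer`, not Lightning's actual code or API): model state is taken from the checkpoint passed to test, but the epoch loop counter is not restored, so logging reports whatever epoch the trainer last saw.

```python
# Hypothetical sketch of the reported behavior -- NOT Lightning's actual code.
# The trainer uses the checkpoint passed to test(), but never restores the
# epoch loop counter from it, so logging reports the epoch left over from
# the previous fit instead of the epoch the checkpoint was saved at.

class FakeTrainer:
    def __init__(self):
        self.current_epoch = 0

    def fit(self, max_epochs, save_at):
        """Train and save a checkpoint snapshot at each epoch in save_at."""
        checkpoints = {}
        for epoch in range(max_epochs):
            self.current_epoch = epoch
            if epoch in save_at:
                checkpoints[epoch] = {"epoch": epoch}  # state snapshot
        return checkpoints

    def test(self, ckpt):
        # Bug sketch: the checkpoint's epoch is ignored here; a fixed
        # trainer would set `self.current_epoch = ckpt["epoch"]`
        # before logging.
        return {"logged_epoch": self.current_epoch}


trainer = FakeTrainer()
ckpts = trainer.fit(max_epochs=11, save_at={2, 10})

print(trainer.test(ckpts[10]))  # {'logged_epoch': 10} -- looks correct
print(trainer.test(ckpts[2]))   # {'logged_epoch': 10} -- should be epoch 2
```

The second test "happens" to look wrong while the first looks right, because the stale counter coincides with the last checkpoint's epoch, matching the report above.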