Open · rrtjr opened this issue 2 years ago
Testing it further shows that this scenario only happens with the default Ranger optimizer. Setting it to Adam works just fine.
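For anyone who wants to apply that workaround, here is a minimal sketch; `training` is a placeholder for your own `TimeSeriesDataSet`, and `optimizer` is the model hyperparameter this thread refers to:

```python
from pytorch_forecasting import TemporalFusionTransformer

# Overriding the default optimizer ("ranger") sidesteps the KeyError when
# resuming from a checkpoint; `training` is a placeholder TimeSeriesDataSet.
tft = TemporalFusionTransformer.from_dataset(training, optimizer="adam")
```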
Hi, I also encountered this issue and I found the reason why it is happening.

First of all, with pytorch-forecasting==0.9.2 this problem does not occur. The reason is that for pytorch-forecasting>=0.10.0 the `__setstate__` method in `Ranger` has changed. However, pytorch calls `__setstate__` like this:

```python
self.__setstate__({'state': state, 'param_groups': param_groups})
```

So in `__setstate__` in `Ranger` it should be something like this:

```python
self.radam_buffer = state["state"]["radam_buffer"]
```
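Putting that together, the fixed method could look roughly like this. This is a sketch based on the call above, not the actual source of optim.py, and the `super()` call is an assumption about the rest of the method:

```python
# Hypothetical sketch of a fixed Ranger.__setstate__; the rest of the real
# method in pytorch_forecasting/optim.py may differ.
def __setstate__(self, state):
    super().__setstate__(state)
    # PyTorch hands over {'state': ..., 'param_groups': ...}, so the custom
    # buffer sits one level deeper than the old lookup assumed.
    self.radam_buffer = state["state"]["radam_buffer"]
```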
This bug still exists in version 0.10.3. We can add `state = state["state"]` at line 133 of optim.py to temporarily avoid the KeyError.
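A quick way to check whether an installed version is affected, without a full training run, is a state-dict round trip, which goes through the same `__setstate__` path that checkpoint resuming uses. This assumes `Ranger` is importable from `pytorch_forecasting.optim` (the optim.py referenced above); the single dummy parameter is arbitrary:

```python
import torch
from pytorch_forecasting.optim import Ranger

# One dummy parameter is enough to exercise the optimizer's state-dict
# round trip that checkpoint resuming relies on.
param = torch.nn.Parameter(torch.zeros(3))
opt = Ranger([param])

# load_state_dict internally calls
# self.__setstate__({'state': ..., 'param_groups': ...}),
# which raises the KeyError on affected versions.
opt.load_state_dict(opt.state_dict())
print("no KeyError: this version looks fine")
```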
Expected behavior
I am currently fitting my TFT model and it works fine initially. However, the process was interrupted, so I added `ckpt_path` to resume training. After adding the `ckpt_path`, I am getting a KeyError. I expect the fitting to just continue after I add the checkpoint path.
Actual behavior
I added `ckpt_path` so my model can resume training. I am getting this error when I do so:
Here is the stack trace for your reference:
Code to reproduce the problem
Relevant code for your reference as well
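Since the original snippet is not reproduced above, here is a minimal self-contained sketch that exercises the same path; the synthetic data, model settings, and checkpoint handling are all placeholders rather than the issue author's actual code:

```python
import numpy as np
import pandas as pd
import pytorch_lightning as pl
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet

# Tiny synthetic series, just enough to build a TimeSeriesDataSet.
data = pd.DataFrame(
    {
        "time_idx": np.arange(100),
        "value": np.sin(np.arange(100) / 10).astype("float32"),
        "group": "a",
    }
)
training = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="value",
    group_ids=["group"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
)
train_dataloader = training.to_dataloader(train=True, batch_size=16)

# The default optimizer is "ranger", which is the configuration that fails.
tft = TemporalFusionTransformer.from_dataset(training)

trainer = pl.Trainer(max_epochs=1, logger=False)
trainer.fit(tft, train_dataloaders=train_dataloader)
ckpt = trainer.checkpoint_callback.best_model_path

# Resuming from the checkpoint restores the optimizer state and raises
# the KeyError on affected versions; optimizer="adam" above avoids it.
resume_trainer = pl.Trainer(max_epochs=2, logger=False)
resume_trainer.fit(tft, train_dataloaders=train_dataloader, ckpt_path=ckpt)
```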
I appreciate all the support I can get, thank you very much.