@andrrizzi we looked into it. The problem is that, when loading a checkpoint, in

```python
super().__init__(in_features=layers[0], out_features=layers[-1], **kwargs)
```

`kwargs` also contains `in_features` and `out_features`, so the same keyword arguments are passed twice.
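
For reference, a minimal, self-contained sketch of the failure mode (plain Python, no Lightning; `Base`/`Child` are illustrative stand-ins for `BaseCV` and a CV subclass): `load_from_checkpoint` effectively re-instantiates the model as `cls(**saved_hyperparameters)`, so the saved `in_features`/`out_features` come back through `**kwargs` and clash with the explicit arguments:

```python
class Base:
    def __init__(self, in_features, out_features, **kwargs):
        self.in_features = in_features
        self.out_features = out_features


class Child(Base):
    def __init__(self, layers, **kwargs):
        # in/out_features are derived from `layers` and passed explicitly
        super().__init__(in_features=layers[0], out_features=layers[-1], **kwargs)


Child(layers=[10, 5, 2])  # direct construction: fine

# Checkpoint loading effectively calls cls(**saved_hparams), and the saved
# hyperparameters also contain in_features/out_features:
saved_hparams = {"layers": [10, 5, 2], "in_features": 10, "out_features": 2}
Child(**saved_hparams)
# TypeError: __init__() got multiple values for keyword argument 'in_features'
```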
If we delete those keys before calling the parent class's `__init__`, it works:

```python
kwargs.pop("in_features", None)
kwargs.pop("out_features", None)
super().__init__(in_features=layers[0], out_features=layers[-1], **kwargs)
```
but what I don't like is that we would need to do this in every class that inherits from `BaseCV`. Do you have any suggestions?
If all the inherited CVs explicitly pass `in_features`/`out_features` to `BaseCV.__init__` based on some other init argument, an alternative might be to modify `BaseCV.__init__` to call `self.save_hyperparameters(ignore=['in_features', 'out_features'])`. I'm not sure, but I seem to remember that only saved parameters are then restored from the checkpoint. If only a handful are doing it, then we might add that `save_hyperparameters(ignore=...)` bit individually in their `__init__`.
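
Roughly, a sketch of what I mean (assuming `BaseCV` is a `LightningModule`; the exact mlcolvar signature may differ):

```python
import lightning  # or pytorch_lightning, depending on the installed version


class BaseCV(lightning.LightningModule):
    def __init__(self, in_features, out_features, **kwargs):
        super().__init__(**kwargs)
        # Keep in_features/out_features out of the saved hyperparameters:
        # subclasses re-derive them (e.g. from `layers`), so they should not
        # be passed back through **kwargs by load_from_checkpoint.
        self.save_hyperparameters(ignore=["in_features", "out_features"])
        self.in_features = in_features
        self.out_features = out_features
```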
Loading a DeepTDA CV from a checkpoint does not work.
Minimal (non-)working example, giving an error at initialization:
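
A sketch along these lines (the `DeepTDA` argument values and the checkpoint path are illustrative placeholders):

```python
from mlcolvar.cvs import DeepTDA

model = DeepTDA(
    n_states=2,
    n_cvs=1,
    target_centers=[-7.0, 7.0],
    target_sigmas=[0.2, 0.2],
    layers=[10, 8, 1],
)
# ... fit with a lightning.Trainer and save a checkpoint, then:
model = DeepTDA.load_from_checkpoint("checkpoints/tda.ckpt")
# TypeError: __init__() got multiple values for keyword argument 'in_features'
```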
We should also check the other CVs and add regtests for this feature (as of now, only RegressionCV checkpointing was tested, in this notebook: https://mlcolvar.readthedocs.io/en/stable/notebooks/tutorials/intro_3_loss_optim.html#Model-checkpointing).
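
A possible shape for such a regtest (a hedged sketch: the `DictDataset`/`DictModule` setup and the argument values are illustrative and may need adapting to the actual mlcolvar API):

```python
import torch
import lightning
from mlcolvar.cvs import DeepTDA
from mlcolvar.data import DictDataset, DictModule


def test_deeptda_checkpoint_roundtrip(tmp_path):
    # Toy two-state dataset (sizes are arbitrary).
    X = torch.randn(100, 10)
    y = torch.randint(0, 2, (100,))
    datamodule = DictModule(DictDataset({"data": X, "labels": y}))

    model = DeepTDA(
        n_states=2,
        n_cvs=1,
        target_centers=[-7.0, 7.0],
        target_sigmas=[0.2, 0.2],
        layers=[10, 8, 1],
    )
    trainer = lightning.Trainer(
        max_epochs=1, accelerator="cpu", logger=False, enable_checkpointing=False
    )
    trainer.fit(model, datamodule)

    # Save and reload: the reload is the step that currently fails for DeepTDA.
    ckpt_path = tmp_path / "tda.ckpt"
    trainer.save_checkpoint(ckpt_path)
    reloaded = DeepTDA.load_from_checkpoint(ckpt_path)

    # The reloaded CV should reproduce the original outputs.
    model.eval()
    reloaded.eval()
    assert torch.allclose(model(X), reloaded(X))
```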