As of now, the trainers can load a pretrained model via the `--model` flag. We haven't really used this feature so far, and because it is implemented as a flag it is hard to handle in the grid trainers, which currently just ignore it. I'd like the grid trainers to support it, since loading a pretrained model per experiment is useful (e.g. to finetune a pretrained model). I'm thinking of either:

- Adding a corresponding `--models` flag to the grid trainers, which the user could use to indicate the trained models they want to reuse. But this could get messy: we would have to check that all the models are present, which experiments they are compatible with, etc.
- Moving `--model` in the trainers from a flag to a config parameter that the user could specify in the config file. I know this wouldn't be consistent with the tester, but I find it cleaner and easier to handle in the grid trainers.
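To make the second option concrete, here is a minimal sketch of what it could look like. All names (`build_trainer`, the `"model"` config key, the checkpoint paths) are illustrative assumptions, not our actual API:

```python
# Hypothetical sketch: "model" as an optional config entry instead of a CLI flag.
# The grid trainer then needs no extra flag; each experiment's config decides
# on its own whether to start from a pretrained checkpoint.

def build_trainer(config: dict) -> dict:
    """Create a (toy) trainer, optionally restoring a pretrained model."""
    pretrained = config.get("model")  # None if the key is absent
    return {"config": config, "pretrained": pretrained}

# Per-experiment configs in a grid: each may name its own checkpoint.
grid = [
    {"lr": 1e-3, "model": "checkpoints/base.pt"},  # finetunes a checkpoint
    {"lr": 1e-4},                                  # trains from scratch
]
trainers = [build_trainer(cfg) for cfg in grid]
```

The nice property is that compatibility checks stay local to each experiment's config instead of requiring a global `--models`-to-experiments matching step in the grid trainer.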
We can discuss that :slightly_smiling_face: