Open n-splv opened 6 months ago
+1, I am also wondering how the intermediate model checkpoints are supposed to be used. They are saved during the fine-tuning phase, not during the classifier-training phase, so my interpretation is that they need to have their classifier heads trained afterwards. Is that correct?
Update: I have tried evaluating my own model from these checkpoints (I cannot share the code). Precision drops drastically compared to the fully trained model, which suggests to me that the classifier head is not trained in the checkpoints.
@n-splv If you are interested in using these checkpoints, the workaround for me was to train the classifier head myself ("filling in" the missing logic from Trainer.train()):
from setfit import SetFitModel, Trainer

# Load the intermediate checkpoint: this restores the fine-tuned embedding body,
# but not a trained classifier head.
model = SetFitModel.from_pretrained(<checkpoint>)

trainer = Trainer(
    model=model,
    args=args,
    ...
)

# Re-run the classifier training that Trainer.train() would normally do at the end.
train_parameters = trainer.dataset_to_parameters(trainer.train_dataset)
trainer.train_classifier(*train_parameters, args=trainer.args)
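Note that the head trained this way only lives in memory; if you want to reuse the checkpoint later, you still need to call save_pretrained on the model afterwards, otherwise you are back to the original problem of an untrained head on disk.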
Step 1: Train a model:
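My training code is roughly the following (a minimal sketch; the dataset, base model, and hyperparameters are placeholders rather than my actual setup):

from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data and hyperparameters, not my actual setup.
train_dataset = load_dataset("SetFit/sst2", split="train[:64]")

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    output_dir="checkpoints",  # intermediate checkpoints are written here during embedding fine-tuning
    num_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()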
Step 2: Save the model explicitly. The examples in the docs always do this, but it is never clearly communicated that this is absolutely necessary and, in fact, the only way to use the model later:
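The explicit save is just the following (the path is arbitrary):

# Save the model, including the trained classifier head.
model.save_pretrained("my-setfit-model")

# Later, this is what actually restores a working model, head included:
model = SetFitModel.from_pretrained("my-setfit-model")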
Step 3: Try to load the model from the latest checkpoint:
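For example (the checkpoint directory name is illustrative; use whichever step directory ended up under output_dir):

from setfit import SetFitModel

model = SetFitModel.from_pretrained("checkpoints/step_500")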
Without any warning, this model will not perform well, because the classifier (head) weights have not been loaded or even saved in the first place. If we compare this model's head with the one we saved explicitly, the difference is obvious:
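A quick way to see the difference, assuming the default scikit-learn LogisticRegression head (the paths come from the earlier steps and are placeholders):

from setfit import SetFitModel

saved = SetFitModel.from_pretrained("my-setfit-model")          # saved explicitly with save_pretrained
restored = SetFitModel.from_pretrained("checkpoints/step_500")  # loaded from an intermediate checkpoint

# The explicitly saved model has a fitted head; the checkpoint's head has no learned coefficients.
print(getattr(saved.model_head, "coef_", None))
print(getattr(restored.model_head, "coef_", None))  # typically None / untrained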
So unless I have messed something up, my proposal would be to either make this behavior clear to the user or, better, to fix it so that the checkpoints are actually usable.