carlosrokk3r closed this issue 3 years ago
The model size for inference should be the same `model_size`, `albert-large-v2`. The script automatically detects the saved model path, so you don't have to pass it as a parameter.
May I ask how the model detects the saved model path automatically? I don't see any code that specifies the model path, so I'm confused about how the infer function will load our fine-tuned model.
@h3553493 If I'm not mistaken, it loads the fine-tuned model in this line. The `load_state` function automatically searches for the best model / last checkpoint and loads it into the NN. This is called from the `infer_from_trained` class, so the model you get has either the last checkpoint (by default) or the best model (if you change the `load_state` call to have `load_best=True`) loaded.
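To make the checkpoint-discovery behavior described above concrete, here is a minimal sketch of how such a `load_state`-style lookup could work. The file-name patterns (`task_test_checkpoint_1.pth.tar` for the last checkpoint) are assumptions extrapolated from the `task_test_model_best_1.pth.tar` name mentioned later in this thread, and `find_checkpoint` is a hypothetical helper, not the repo's actual function:

```python
import os

def find_checkpoint(data_dir, model_no=1, load_best=False):
    """Locate a saved model file in data_dir.

    Mirrors the described load_state behavior: pick the best-model
    file when load_best=True, otherwise the last checkpoint.
    File-name patterns are assumptions, not the repo's exact names.
    """
    if load_best:
        name = "task_test_model_best_%d.pth.tar" % model_no
    else:
        name = "task_test_checkpoint_%d.pth.tar" % model_no
    path = os.path.join(data_dir, name)
    # Return None when nothing was saved yet, so the caller can
    # fall back to training from scratch.
    return path if os.path.isfile(path) else None
```

The real code would then pass the resulting path to `torch.load` and copy the stored weights into the network with `load_state_dict`.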
Hi @plkmo, first of all, thank you for this repo, it is really nice.
I've trained a model using your script, this way:
This generates a model in the data folder, called `task_test_model_best_1.pth.tar`. Now I want to run inference using this model that I previously trained, so I tried the following:
But when doing this, I'm getting the following error:
Am I doing something wrong? Could you please correct me if I'm using this script the wrong way?