YonDraco opened 2 months ago
Sorry, I don't understand. Can you elaborate? Which checkpoint did you use?
@ashok-arjun I saved the best checkpoint `epoch=36-step=1850.ckpt` after fine-tuning and used the `get_lag_llama_predictions` function, as in Colab demo 2, to make predictions from that checkpoint. However, when I load this checkpoint for prediction, the results are worse than they were during fine-tuning. So I think I need to load both the checkpoint and hparams.yml, but I don't know how to handle them.

Sorry, I'm not sure I follow. Do you mean fine-tuning or training?

I used the best checkpoint after fine-tuning, then called the `get_lag_llama_predictions` function to predict from that checkpoint. However, the predicted results are much worse than they were during fine-tuning and training.
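For reference, the `hparams.yml` that Lightning writes next to a checkpoint can be read back so that prediction uses the same hyperparameters as fine-tuning. The sketch below only shows the generic loading step; the file contents and the `model_kwargs`/`context_length` keys are illustrative assumptions, not the actual Lag-Llama values, and the real file would already exist in your checkpoint directory rather than being written here.

```python
# Minimal sketch: read a Lightning-style hparams.yaml so the same
# hyperparameters used during fine-tuning are reused at prediction time.
# The keys below (model_kwargs, context_length, input_size) are
# illustrative assumptions about the file's layout.
import yaml

# Stand-in for the hparams.yaml saved alongside the checkpoint.
sample = """
model_kwargs:
  context_length: 32
  input_size: 1
"""
with open("hparams.yaml", "w") as f:
    f.write(sample)

with open("hparams.yaml") as f:
    hparams = yaml.safe_load(f)

# These values would then be passed to the estimator used for prediction,
# so it is configured identically to the fine-tuned model.
model_kwargs = hparams["model_kwargs"]
print(model_kwargs["context_length"])  # the context length fine-tuning used
```

A mismatch here (e.g. predicting with a different context length than the one the checkpoint was fine-tuned with) is a common reason the loaded checkpoint performs worse than it did during training.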