During fine-tuning the models are automatically saved, the best being, say,
`\lag-llama-main\lightning_logs\version_12\checkpoints\epoch=34-step=5678.ckpt`
(this is displayed on the console).
For subsequent zero-shot prediction, should I provide `LagLlamaEstimator(...)` with
`epoch=34-step=5678.ckpt` instead of `lag-llama.ckpt`? Thank you.
Hi, if you're fine-tuning, then the evaluation is no longer "zero-shot". And yes, you should provide the right (latest) checkpoint if you want to evaluate results from that checkpoint.
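For reference, a minimal sketch of pointing the estimator at a fine-tuned checkpoint, following the constructor arguments used in the Lag-Llama demo notebook (`ckpt_path` plus model hyperparameters read back from the checkpoint). The path, prediction length, and batch settings below are placeholders; adjust them to your run, and note the exact argument list may differ slightly across versions of the repo.

```python
import torch
from lag_llama.gluon.estimator import LagLlamaEstimator

# Placeholder: path to the checkpoint Lightning saved during fine-tuning
ckpt_path = r"lag-llama-main\lightning_logs\version_12\checkpoints\epoch=34-step=5678.ckpt"

# Read the model hyperparameters back from the checkpoint so the estimator
# is built with the same architecture that was fine-tuned
ckpt = torch.load(ckpt_path, map_location="cpu")
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]

estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,          # fine-tuned weights instead of lag-llama.ckpt
    prediction_length=24,         # placeholder: your forecast horizon
    context_length=72,            # placeholder: your context length
    input_size=estimator_args["input_size"],
    n_layer=estimator_args["n_layer"],
    n_embd_per_head=estimator_args["n_embd_per_head"],
    n_head=estimator_args["n_head"],
    scaling=estimator_args["scaling"],
    time_feat=estimator_args["time_feat"],
    batch_size=1,
    num_parallel_samples=100,
)

# Build the predictor as in the demo notebook
lightning_module = estimator.create_lightning_module()
transformation = estimator.create_transformation()
predictor = estimator.create_predictor(transformation, lightning_module)
```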