iusztinpaul / hands-on-llms

🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
MIT License

Using fine-tuned model for inference #74

Closed dvquy13 closed 6 months ago

dvquy13 commented 6 months ago

Hi @iusztinpaul,

Love the course so far!

I have a question: Shouldn't we use our own fine-tuned model for inference instead of Paul's PEFT model here? https://github.com/iusztinpaul/hands-on-llms/blob/5c1887b4872df749e8863d2fb85f2f456b6af9ac/modules/training_pipeline/configs/dev_inference_config.yaml#L2

If so, how should we publish our model from an experiment to the Comet Model Registry? Is it done manually via the Register Model button in the Comet experiment console view?

[screenshot: the Register Model button in the Comet experiment view]

Thanks!

iusztinpaul commented 6 months ago

Hello,

Happy to hear that!

Yes, you got that right! I added my own version to help you test things out, but ideally, you should use your own version.

You should pick the fine-tuned model and register it manually, as you suggested in the screenshot.

Just be careful to pick a name and version that make sense, and update them in the YAML file. When using your own fine-tuned model, the reference will look something like: yourname/your-model-name:your-version
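To make that concrete, here is a minimal sketch of what the edited `dev_inference_config.yaml` entry might look like after switching to your own registered model. The key name and the values below are placeholders for illustration; keep whatever key the linked config actually uses and substitute your own Comet workspace, model name, and version:

```yaml
# dev_inference_config.yaml (sketch -- placeholder key and values)
# Comet Model Registry reference format: workspace/model-name:version
peft_model_id: "yourname/your-model-name:1.0.0"
```

After registering the model in the Comet UI, the workspace, name, and version shown on the registry page are what go into this string.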

dvquy13 commented 6 months ago

Got it working.