Open licy02 opened 1 year ago
Can you elaborate on what you meant by output?
My previous statement was unclear. What I meant was the model weights saved after fine-tuning. In previous projects I've worked on, only the LoRA adapter weights were stored after fine-tuning. In this project, however, all of the model's weight parameters were stored after fine-tuning, and I would like to know whether this is a problem with my fine-tuning setup or whether it is the intended behavior.
Hi @licy02,
In the LoRA project, storing all weight parameters after fine-tuning is intentional, and it does differ from some previous projects. Saving the complete model state means the checkpoint can be used on its own, which covers more use cases than an adapter-only file. Hope this helps!
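If you only want the adapter weights out of a full checkpoint, a common approach is to filter the state dict by key name. This is a minimal sketch, assuming the adapter keys contain the substring "lora_" (the naming convention used by common LoRA implementations such as PEFT); the dictionary below is a toy stand-in for a real checkpoint, not this project's actual key names.

```python
def extract_lora_weights(state_dict):
    """Return only the entries whose key marks them as LoRA adapter weights."""
    return {k: v for k, v in state_dict.items() if "lora_" in k}

# Toy state dict standing in for a real fine-tuned checkpoint.
full_state = {
    "model.layers.0.attn.q_proj.weight": "base tensor",
    "model.layers.0.attn.q_proj.lora_A.weight": "adapter tensor A",
    "model.layers.0.attn.q_proj.lora_B.weight": "adapter tensor B",
}

adapter_only = extract_lora_weights(full_state)
print(sorted(adapter_only))
```

The filtered dict can then be saved on its own, which keeps the checkpoint small while the full save remains available when you need a standalone model.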
We use LoRA; is the output the whole model?