I used QLoRA to fine-tune an LLM, and the output folder now contains several checkpoints. How can I load and test the resulting model? I tried serving it with an open-source API, but it seems to require a config.json to deploy the model on my server. How can I deploy the fine-tuned model locally?
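For context, this is roughly the loading pattern I expected to work, based on the PEFT docs — a sketch only; the base model name and adapter path are placeholders for my actual setup:

```python
# Sketch: load a QLoRA adapter on top of its base model with PEFT,
# then merge it so the saved result carries its own config.json.
# BASE and ADAPTER below are placeholders, not my real paths.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "base-model-name"       # placeholder: the model QLoRA was applied to
ADAPTER = "output/checkpoint"  # placeholder: folder with adapter_config.json

base = AutoModelForCausalLM.from_pretrained(BASE)
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = PeftModel.from_pretrained(base, ADAPTER)

# merge_and_unload() folds the LoRA weights into the base model;
# save_pretrained() on the merged model writes a standalone config.json
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
tokenizer.save_pretrained("merged-model")
```

Is merging like this the right way to get a deployable folder, or is there a way to serve the adapter directly?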