lxe / simple-llm-finetuner

Simple UI for LLM Model Finetuning

How do I merge a trained LoRA with the Llama 7B weights? #41

Open Gitterman69 opened 1 year ago

Gitterman69 commented 1 year ago

How do I merge a trained LoRA with the Llama 7B weights? Is there a script? It would make it much easier to share weights and would improve portability, file management, etc.

It would be an amazing feature for the training tab as well!
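
Editor's note: a minimal sketch of one common way to do this with the PEFT library's `merge_and_unload()`, assuming the adapter was saved with PEFT's `save_pretrained` and the base model is available in Hugging Face format. The model name, adapter path, and output directory below are placeholders, not paths from this repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "path/to/llama-7b-hf"   # placeholder: base Llama 7B in HF format
lora_path = "./lora-output"               # placeholder: trained LoRA adapter directory
output_dir = "./llama-7b-merged"          # placeholder: where to write merged weights

# Load the base model and tokenizer.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map={"": "cpu"},  # CPU is enough for a merge; use "auto" if you have GPU memory
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach the LoRA adapter, then fold its deltas into the base weights.
model = PeftModel.from_pretrained(base_model, lora_path, torch_dtype=torch.float16)
merged = model.merge_and_unload()

# Save a standalone HF checkpoint that no longer depends on peft.
merged.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```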

Gitterman69 commented 1 year ago

https://github.com/clcarwin/alpaca-weight

This might be useful?! I'm trying it today and will report back.

alior101 commented 1 year ago

I'm looking for that option too. I want to merge back into HF weights so I can eventually convert to ggml and use with llama.cpp. Did you manage to do it with the scripts in the alpaca-weight repo?
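
Editor's note: if the merge above worked, the output directory is a plain Hugging Face checkpoint, so it can be loaded without `peft` and then fed to llama.cpp's conversion script (the script name has changed across llama.cpp versions). A quick sanity check, assuming the hypothetical `./llama-7b-merged` directory from the earlier sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint with plain transformers (no peft import)
# to confirm the LoRA weights were actually folded in.
model = AutoModelForCausalLM.from_pretrained("./llama-7b-merged", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("./llama-7b-merged")

inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```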