LLukas22 / llm-rs-python

Unofficial python bindings for the rust llm library. 🐍❤️🦀
MIT License

How to convert LoRA adapters for use? #14

Closed sidharthiimc closed 1 year ago

sidharthiimc commented 1 year ago

```python
from llm_rs import Llama

model = Llama("path/to/model.bin", lora_paths=["path/to/lora.bin"])
```

We also need a way to convert LoRA adapters.

LLukas22 commented 1 year ago

At present, you must manually convert your LoRA weights using a script such as convert-lora-to-ggml.py.
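For reference, a typical invocation looks something like the following. The paths are placeholders: the script comes from the llama.cpp repository, and (to the best of my understanding) it expects a directory containing the PEFT adapter files (`adapter_config.json` and `adapter_model.bin`); check the script's own help output for the exact arguments it accepts.

```shell
# Hypothetical paths -- adjust to your llama.cpp checkout and adapter directory.
# The script reads the PEFT adapter files from the given directory and writes
# a GGML-format adapter (e.g. ggml-adapter-model.bin) alongside them.
python llama.cpp/convert-lora-to-ggml.py path/to/lora-adapter-dir
```

The resulting GGML adapter file is what you would then pass to `lora_paths` when constructing the model.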

That said, it's important to note that LoRA support is likely limited to Llama-based models, as the LoRA GGML format has not yet been standardized.

Could you please specify the architecture for which you are trying to create a LoRA?

sidharthiimc commented 1 year ago

I was using Llama, but found the 7B model's performance wasn't up to the mark. I will stick to 8-bit usage.