johnsmith0031 / alpaca_lora_4bit

MIT License

Does this repo support 2-bit finetuning of the LLaMA model? Is there an example showing how to run the scripts? #122

Open zlh1992 opened 1 year ago

zlh1992 commented 1 year ago

By the way, are there any 2-bit model weights for LLaMA 7B, 13B, 33B, or 65B?

johnsmith0031 commented 1 year ago

You can use load_llama_model_4bit_low_ram and set bits=2. I haven't seen any 2-bit weights for the LLaMA model either; I'm also curious about that.
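A minimal sketch of what the maintainer describes. The paths below are hypothetical placeholders, and the exact signature of load_llama_model_4bit_low_ram (argument names, return values) is an assumption based on this comment, not verified against the repo:

```python
# Hypothetical sketch: loading LLaMA weights at 2-bit precision with
# alpaca_lora_4bit, per the maintainer's suggestion of passing bits=2.
# Paths and keyword names below are assumptions, not verified API.

load_kwargs = {
    "config_path": "./llama-7b-2bit/",   # hypothetical local config directory
    "model_path": "./llama-7b-2bit.pt",  # hypothetical 2-bit checkpoint file
    "groupsize": -1,                     # no act-order grouping (assumed default)
    "bits": 2,                           # 2-bit instead of the default 4-bit
}

if __name__ == "__main__":
    # Import inside the guard so reading/testing the sketch does not
    # require the alpaca_lora_4bit package to be installed.
    from autograd_4bit import load_llama_model_4bit_low_ram

    # Assumed to return a (model, tokenizer) pair, as the 4-bit path does.
    model, tokenizer = load_llama_model_4bit_low_ram(**load_kwargs)
    print(type(model).__name__)
```

Note that even if the loader accepts bits=2, you would still need a checkpoint actually quantized to 2 bits; as the thread notes, no public 2-bit LLaMA weights were known at the time.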