Blaizzy opened 5 months ago
More details regarding the error, please. Were you also the one who posted a bnb issue on Discord?
Any new updates regarding this error? I have a similar issue.
@NanoCode012
Could you let me know what else you are looking for?
Could someone post logs of the issue? Is it due to the quant_config check?
Ayt, got it!
I will post the logs later today
@NanoCode012 Yes, for me the error is that the quant_config check always raises an error because the quant_method is not gptq, and if I set gptq: false in the yaml, it raises an error saying I can't load a quantized model without gptq.
So if my model was previously BnB-quantized, I have no clue how I can fine-tune it with axolotl.
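For anyone following along, the failing validation appears to have roughly this shape. This is a paraphrased sketch based on the description above, not the actual axolotl source; all names and messages are illustrative:

```python
# Paraphrased sketch of the validation described above; names and
# messages are illustrative stand-ins, not the real axolotl code.
from types import SimpleNamespace

def check_quantization(cfg, model_config):
    quant_config = getattr(model_config, "quantization_config", None)
    if cfg.gptq:
        # gptq: true -> the checkpoint must declare quant_method == "gptq",
        # so a bitsandbytes-quantized checkpoint is rejected here...
        if quant_config is None or quant_config.get("quant_method") != "gptq":
            raise ValueError("quantization_config is not set to gptq")
    elif quant_config is not None:
        # ...and with gptq: false, a checkpoint that already carries a
        # quantization_config is rejected here instead.
        raise ValueError("cannot load a quantized model without gptq")

# A BnB-quantized checkpoint fails both ways:
bnb_model = SimpleNamespace(quantization_config={"quant_method": "bitsandbytes"})
for gptq_flag in (True, False):
    try:
        check_quantization(SimpleNamespace(gptq=gptq_flag), bnb_model)
    except ValueError as err:
        print(f"gptq={gptq_flag}: {err}")
```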
@Blaizzy what was your fix?
I used a full-precision model and set load_in_4bit: true.
Example:
base_model: meta/llama-7b-hf
load_in_4bit: true
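In a fuller config, that workaround looks something like the sketch below; the adapter and dtype lines are my additions for illustration, not from this thread:

```yaml
# Sketch of the workaround: start from a full-precision checkpoint and
# let bitsandbytes quantize it at load time. The adapter and bf16 lines
# are illustrative additions, not part of the original report.
base_model: meta/llama-7b-hf
load_in_4bit: true
adapter: qlora   # 4-bit loading is typically paired with a QLoRA adapter
bf16: true
```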
However, what I actually wanted was to load a pre-quantized model:
base_model: meta/llama-7b-hf-4bit
Thanks
+1, I'd like to do the same (it would be a nice addition).
Please check that this issue hasn't been reported before.
Expected Behavior
I want to load a BnB-quantized model.
Current behaviour
It throws a ValueError.
Steps to reproduce
Launch training with the config yaml below.
Config yaml
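A minimal illustrative config that hits the check, assuming a checkpoint that already ships a bitsandbytes quantization_config; the model id below is a hypothetical placeholder:

```yaml
# Placeholder repro config: base_model points at a checkpoint that
# already carries a bitsandbytes quantization_config. The model id
# is hypothetical, for illustration only.
base_model: someuser/llama-7b-hf-bnb-4bit
load_in_4bit: true
adapter: qlora
```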
Possible solution
Extend or remove the hard-coded gptq check introduced here: https://github.com/OpenAccess-AI-Collective/axolotl/pull/913
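One possible shape for an extended check, assuming the validation sketched earlier in the thread. This is an untested sketch with illustrative names, not a patch against the actual source:

```python
# Untested sketch of a relaxed check that also accepts bitsandbytes
# checkpoints; names and messages are illustrative, not a real patch.
SUPPORTED_QUANT_METHODS = {"gptq", "bitsandbytes"}

def check_quantization(cfg, model_config):
    quant_config = getattr(model_config, "quantization_config", None)
    if quant_config is None:
        return  # full-precision checkpoint, nothing to validate
    method = quant_config.get("quant_method")
    if method not in SUPPORTED_QUANT_METHODS:
        raise ValueError(f"unsupported quant_method: {method!r}")
    if cfg.gptq and method != "gptq":
        raise ValueError("gptq: true but the checkpoint is not GPTQ-quantized")
```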
Which Operating Systems are you using?
Python Version
3.10
axolotl branch-commit
main
Acknowledgements