unslothai / unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

16bit quantization bug #302

Open danielhanchen opened 5 months ago

danielhanchen commented 5 months ago
/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py in to_dict(self)
    910         if hasattr(self, "quantization_config"):
    911             output["quantization_config"] = (
--> 912                 self.quantization_config.to_dict()
    913                 if not isinstance(self.quantization_config, dict)
    914                 else self.quantization_config

AttributeError: 'NoneType' object has no attribute 'to_dict'
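The error comes from the branch shown above: `hasattr(self, "quantization_config")` is `True` even when the attribute has been set to `None`, so `.to_dict()` is then called on `None`. A minimal sketch of the failure mode, using hypothetical stand-in classes (not the real transformers objects) and a possible workaround of deleting the `None` attribute rather than leaving it set:

```python
class DummyQuantConfig:
    """Stand-in for a real quantization config object (hypothetical)."""
    def to_dict(self):
        return {"load_in_4bit": True}

class DummyModelConfig:
    """Stand-in for transformers' PretrainedConfig (hypothetical)."""
    pass

def serialize(config):
    """Mirrors the to_dict() branch shown in the traceback."""
    output = {}
    if hasattr(config, "quantization_config"):  # True even if value is None
        output["quantization_config"] = (
            config.quantization_config.to_dict()
            if not isinstance(config.quantization_config, dict)
            else config.quantization_config
        )
    return output

config = DummyModelConfig()
config.quantization_config = None  # attribute exists but is None

try:
    serialize(config)
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'to_dict'

# Hedged workaround: delete the None attribute so hasattr() returns False
# and the quantization branch is skipped entirely.
del config.quantization_config
print(serialize(config))  # {}
```

This only illustrates why the traceback fires; whether deleting the attribute is the right fix inside unsloth/transformers is not confirmed by the issue.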
alparslanahmed commented 3 months ago

Any progress?