BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0

Training the model throws an error after quantization #112

Open dingtine opened 1 month ago

dingtine commented 1 month ago

When I use 8-bit quantization during pre-training, the code throws this error:

You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details

ababam commented 1 month ago

I deleted model = model.merge_and_unload(), and then everything worked fine.