Closed erland-ramadhan closed 4 months ago
When the `efficient_finetuning` argument is set to `lora` when initializing the model, importing `prepare_model_for_int8_training` raises an error, so I changed it to `prepare_model_for_kbit_training`.
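For reference, newer `peft` releases renamed `prepare_model_for_int8_training` to `prepare_model_for_kbit_training`, which is why the old import fails. A minimal sketch of a version-tolerant lookup (the helper function `get_prepare_fn` is hypothetical, not part of `peft`):

```python
def get_prepare_fn(peft_module):
    """Return the k-bit preparation helper regardless of peft version.

    Newer peft releases renamed `prepare_model_for_int8_training` to
    `prepare_model_for_kbit_training`; this resolves whichever exists
    on the given module object.
    """
    for name in ("prepare_model_for_kbit_training",
                 "prepare_model_for_int8_training"):
        fn = getattr(peft_module, name, None)
        if fn is not None:
            return fn
    raise ImportError("no k-bit training helper found in peft")
```

Usage would be `prepare = get_prepare_fn(peft)` after `import peft`, so the same code runs against both old and new releases.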