Open zhuraromdev opened 3 months ago
Do you use the fine-tune code from IXC 2.0? It is different from the IXC 2.5 finetune code.
Hey, yep, I am using the code from here: https://github.com/InternLM/InternLM-XComposer/blob/main/finetune/finetune.py
@yuhangzang I have tried to run the code with the 2.0 version. However, I am still getting the same error:
Traceback (most recent call last):
File "/home/ubuntu/InternLM-XComposer/InternLM-XComposer-2.0/finetune/finetune.py", line 318, in <module>
train()
File "/home/ubuntu/InternLM-XComposer/InternLM-XComposer-2.0/finetune/finetune.py", line 305, in train
trainer = Trainer(
File "/home/ubuntu/miniconda3/envs/intern_clean/lib/python3.9/site-packages/transformers/trainer.py", line 409, in __init__
raise ValueError(
ValueError: The model you want to train is loaded in 8-bit precision. if you want to fine-tune an 8-bit model, please make sure that you have installed `bitsandbytes>=0.37.0`.
Let me know which additional information is needed, thank you!
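Since the ValueError points at `bitsandbytes>=0.37.0`, a quick thing to rule out is an outdated install. A minimal sketch of the version comparison the error implies (the helper names here are mine, not from transformers):

```python
def parse_version(v):
    """Naive dotted-version parse into a comparable tuple; ignores pre-release tags."""
    return tuple(int(part) for part in v.split(".")[:3])

def supports_8bit_training(installed, required="0.37.0"):
    """True if the installed bitsandbytes version meets the minimum the Trainer expects."""
    return parse_version(installed) >= parse_version(required)

print(supports_8bit_training("0.36.0"))  # older than 0.37.0 -> False
print(supports_8bit_training("0.41.1"))  # meets the minimum -> True
```

You can check the actual installed version with `pip show bitsandbytes`; if it is already >= 0.37.0, the problem is likely elsewhere (e.g. how the quantized model was loaded), not the version check itself.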
Hello,
I have a question regarding fine-tuning of the quantized internlm/internlm-xcomposer2-4khd-7b model. I quantized the 4khd model with lmdeploy and am now trying to fine-tune it. However, I am getting this issue during the process. Do you have any suggestions on how I can solve it?
Env:
finetune_lora.sh
ds_config_zero2.json
data.txt
Traceback:
Also, I am using finetune.py without any changes.