Closed · hellostronger closed this issue 2 months ago
I have seen an existing issue about this from March, but I could not find any useful information on why this error occurs. Hoping for your suggestions.
Please provide your versions of accelerate and bitsandbytes.
@hiyouga accelerate==0.28.0, bitsandbytes==0.43.0. Do these versions have any problems? Hoping for your suggestions.
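For anyone checking their own setup, one way to print the versions that matter here (this snippet is not from the thread):

```sh
python -c "import accelerate, bitsandbytes, transformers, torch; \
print('accelerate', accelerate.__version__); \
print('bitsandbytes', bitsandbytes.__version__); \
print('transformers', transformers.__version__); \
print('torch', torch.__version__)"
```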
Did you use the latest code?
While trying to train https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b, I am getting the same error: "ValueError: Cannot flatten integer dtype tensors".
The error was resolved after I reinstalled LLaMA-Factory. These are the versions:
accelerate 0.29.3, bitsandbytes 0.43.1
@hiyouga Sorry for the late reply on this. Using the newest LLaMA-Factory code, it works correctly now.
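Some background on the error itself, not stated in the thread: FSDP flattens module parameters into a single flat parameter that must have a floating-point dtype, while bitsandbytes stores 4-bit quantized weights as uint8 by default, hence "Cannot flatten integer dtype tensors". bitsandbytes >= 0.43.0 together with transformers >= 4.39 lets the packed weights be stored in a float dtype that FSDP can flatten. A minimal sketch of loading a model that way, using the Hugging Face transformers API (the model name is taken from the comment above, and this is an illustration rather than what LLaMA-Factory does internally):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# bnb_4bit_quant_storage keeps the packed 4-bit weights in a float dtype,
# so FSDP can flatten them (requires bitsandbytes >= 0.43.0, transformers >= 4.39).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/dolphin-2.9-llama3-70b",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
```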
Reminder
Reproduction
CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
    --config_file config.yaml \
    src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path /workspace/models/Yi-34B-Chat \
    --dataset law_with_basis \
    --dataset_dir data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir /workspace/ckpt/Yi-34B-Chat-sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --quantization_bit 4 \
    --plot_loss \
    --fp16
config.yaml
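The attached config.yaml was not captured here. For illustration only, an accelerate FSDP config for this kind of two-GPU QLoRA run typically looks like the following; every value is an assumption, not the author's actual file:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: fp16   # matches the --fp16 flag above
num_machines: 1
num_processes: 2        # matches CUDA_VISIBLE_DEVICES=0,1
rdzv_backend: static
same_network: true
use_cpu: false
```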
Expected behavior
Successful FSDP + QLoRA fine-tuning of Yi-34B-Chat.
System Info
transformers 4.39.3, torch 2.1.2, CUDA 12.1, Python 3.8
Others