hiyouga / LLaMA-Factory

A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

fsdp-qlora yi-34B-chat throw error " ValueError: Cannot flatten integer dtype tensors" #3470

Closed: hellostronger closed this issue 2 months ago

hellostronger commented 2 months ago

Reminder

Reproduction

CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
    --config_file config.yaml \
    src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path /workspace/models/Yi-34B-Chat \
    --dataset law_with_basis \
    --dataset_dir data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir /workspace/ckpt/Yi-34B-Chat-sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --quantization_bit 4 \
    --plot_loss \
    --fp16

config.yaml

compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_forward_prefetch: false
  fsdp_offload_params: true
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
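
For context on the error in the title: FSDP flattens each wrapped module's parameters into a single buffer and requires them all to share a floating-point dtype, while bitsandbytes packs 4-bit weights into integer tensors unless a float storage dtype is requested. The following is a minimal sketch, not taken from this report (only the model path is reused from the command above), of how recent transformers/bitsandbytes versions expose that storage dtype, which is what lets FSDP flatten the quantized weights:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Store the packed 4-bit weights in a float16 container so FSDP can
# flatten them together with the other fp16 parameters.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,   # matches --fp16 above
    bnb_4bit_quant_storage=torch.float16,   # needs bitsandbytes >= 0.43.0 and transformers >= 4.39
)

model = AutoModelForCausalLM.from_pretrained(
    "/workspace/models/Yi-34B-Chat",        # path taken from the reproduction command
    quantization_config=bnb_config,
    torch_dtype=torch.float16,
)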

Expected behavior

Fine-tune Yi-34B-Chat with FSDP + QLoRA successfully.

System Info

transformers 4.39.3, torch 2.1.2, CUDA 12.1, Python 3.8

Others

(screenshots of the error attached)

hellostronger commented 2 months ago

I have seen an existing issue about this written in March, but I could not find any useful information there to figure out why this error occurs. Hoping for your suggestions.

hiyouga commented 2 months ago

Please provide your versions of accelerate and bitsandbytes.
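
A quick way to print the relevant versions (a minimal sketch; the exact package list is an assumption, adjust as needed):

import importlib.metadata as metadata

# Print the installed versions of the packages relevant to FSDP + QLoRA.
for pkg in ("transformers", "accelerate", "bitsandbytes", "peft", "torch"):
    try:
        print(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg} is not installed")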

hellostronger commented 2 months ago

@hiyouga accelerate==0.28.0, bitsandbytes==0.43.0. Do these versions have any problems? Hoping for your suggestions.

hiyouga commented 2 months ago

Did you use the latest code?

etemiz commented 2 months ago

While trying to train https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b, I am getting the same error: "ValueError: Cannot flatten integer dtype tensors". The error was resolved after I reinstalled LLaMA-Factory. These are the versions:

accelerate 0.29.3, bitsandbytes 0.43.1

hellostronger commented 2 months ago

@hiyouga Sorry for the late reply on this case. Using the newest LLaMA-Factory code, it works correctly now.