shuxueslpi / chatGLM-6B-QLoRA

Uses the peft library for efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B, then merges the LoRA model into the base model and quantizes the result to 4 bits.

ChatGLM3-6B fine-tuning fails with RuntimeError: CUDA error: invalid argument #45

Closed: ALLinLLM closed this issue 11 months ago

ALLinLLM commented 11 months ago

I am working inside the huggingface/transformers-pytorch-gpu:4.29.1 image. The chatglm-6b example fine-tunes and runs inference without problems, but fine-tuning ChatGLM3-6B fails. I downloaded the model from ModelScope: https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary. My training config JSON:

{
    "output_dir": "saved_files/chatglm3_6b_qlora_t32",
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 8,
    "per_device_eval_batch_size": 4,
    "learning_rate": 1e-3,
    "num_train_epochs": 10,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
    "logging_steps": 1,
    "save_strategy": "steps",
    "save_steps": 500,
    "evaluation_strategy": "steps",
    "eval_steps": 500,
    "optim": "adamw_torch",
    "fp16": false,
    "remove_unused_columns": false,
    "ddp_find_unused_parameters": false,
    "seed": 42
}
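
For reference, this JSON maps field-for-field onto transformers.TrainingArguments, so it can be loaded roughly as in the sketch below (the loading code is my assumption, not necessarily what train_qlora.py actually does):

# Minimal sketch, assuming the JSON keys all correspond to TrainingArguments fields.
import json
from transformers import TrainingArguments

with open("chatglm3_6b_qlora.json", "r") as f:
    train_args_dict = json.load(f)

# Unpack directly; a key that TrainingArguments does not know would raise a TypeError here.
training_args = TrainingArguments(**train_args_dict)
print(training_args.output_dir, training_args.learning_rate)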

My training command:

python3 train_qlora.py \
--train_args_json chatglm3_6b_qlora.json \
--model_name_or_path /share/public/huggingface_cache/ZhipuAI/chatglm3-6b \
--train_data_path data/futures_train.jsonl \
--eval_data_path data/futures_dev.jsonl \
--lora_rank 4 \
--lora_dropout 0.05 \
--compute_dtype fp32
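
For context, flags like --lora_rank 4, --lora_dropout 0.05 and --compute_dtype fp32 typically feed a bitsandbytes 4-bit quantization config plus a peft LoRA config, roughly as sketched below; the exact wiring inside train_qlora.py may differ, and target_modules and lora_alpha are my assumptions for a ChatGLM-style model:

# Hedged sketch of a 4-bit QLoRA setup matching the flags above.
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the base weights; compute in fp32 per --compute_dtype fp32
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

model = AutoModel.from_pretrained(
    "/share/public/huggingface_cache/ZhipuAI/chatglm3-6b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
# Enables gradient checkpointing by default, which is what triggers the
# "use_cache=False" message seen in the log below.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=4,                                  # --lora_rank 4
    lora_alpha=32,                        # assumption: not set on the command line
    lora_dropout=0.05,                    # --lora_dropout 0.05
    target_modules=["query_key_value"],   # assumption for ChatGLM attention blocks
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()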

Error log:

`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
Traceback (most recent call last):
  File "train_qlora.py", line 209, in <module>
    train(args)
  File "train_qlora.py", line 203, in train
    trainer.train(resume_from_checkpoint=resume_from_checkpoint)
  File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1645, in train
    return inner_training_loop(
  File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1938, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2770, in training_step
    self.accelerator.backward(loss)
  File "/usr/local/lib/python3.8/dist-packages/accelerate/accelerator.py", line 1821, in backward
    loss.backward(**kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py", line 157, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
ALLinLLM commented 11 months ago

The CUDA stack in the image was too old. With CUDA 12.1 and torch 2.1.0 this error no longer occurs.
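
For anyone hitting the same error, a quick sanity check (just one way to verify the environment, assuming a single-GPU container) that the newer stack is actually in use:

# Print the versions the fix relies on: torch 2.1.0 built against CUDA 12.1.
import torch
from importlib.metadata import version

print("torch:", torch.__version__)                  # expected: 2.1.0
print("torch built for CUDA:", torch.version.cuda)  # expected: 12.1
print("GPU visible:", torch.cuda.is_available())
print("bitsandbytes:", version("bitsandbytes"))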