ymcui / Chinese-LLaMA-Alpaca-2

中文LLaMA-2 & Alpaca-2大模型二期项目 + 64K超长上下文模型 (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0

Instruction fine-tuning error #434

Closed CCzzzzzzz closed 8 months ago

CCzzzzzzz commented 9 months ago

Pre-submission checklist (all items required)

Issue type

Model training and fine-tuning

Base model

Chinese-Alpaca-2 (7B/13B)

Operating system

Windows

Problem description

lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

pretrained_model=D:\\pyProjects2\\Retrieve-Rewrite-Answer\\pretrain\\chinese-alpaca-2-7b-hf
chinese_tokenizer_path=D:\\pyProjects2\\Retrieve-Rewrite-Answer\\pretrain\\chinese-alpaca-2-7b-hf
dataset_dir=D:\\pyProjects2\\Retrieve-Rewrite-Answer\\finetune-llama\\ZJQA\\train
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
max_seq_length=512
output_dir=./output_dir/alpaca-7b
validation_file=D:\\pyProjects2\\Retrieve-Rewrite-Answer\\finetune-llama\\ZJQA\\dev.json

deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
    --deepspeed ds_zero2_no_offload.json \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval \
    --seed $RANDOM \
    --fp16 \
    --num_train_epochs 10 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 100 \
    --save_steps 200 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length ${max_seq_length} \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --lora_dropout ${lora_dropout} \
    --modules_to_save ${modules_to_save} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --load_in_kbits 16 \
    --save_safetensors False \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False

Dependencies (required for code-related issues)

pip list | grep -E 'transformers|peft|torch|sentencepiece|bitsandbytes'
bitsandbytes              0.39.0
peft                      0.3.0.dev0
sentencepiece             0.1.99
simpletransformers        0.64.3
torch                     2.1.1+cu118
torchaudio                2.1.1+cu118
torchvision               0.16.1+cu118
transformers              4.32.0

Logs or screenshots

[2023-12-01 19:53:32,060] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
[W socket.cpp:663] [c10d] The client socket has failed to connect to [kubernetes.docker.internal]:29500 (system error: 10049 - The requested address is not valid in its context.).

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin D:\Software\anaconda3\envs\KG\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
CUDA SETUP: CUDA runtime path found: D:\Software\anaconda3\envs\KG\bin\cudart64_110.dll
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary D:\Software\anaconda3\envs\KG\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll...
[2023-12-01 19:53:35,209] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[W socket.cpp:663] [c10d] The client socket has failed to connect to [kubernetes.docker.internal]:29500 (system error: 10049 - The requested address is not valid in its context.).
Traceback (most recent call last):
  File "run_clm_sft_with_peft.py", line 513, in <module>
    main()
  File "run_clm_sft_with_peft.py", line 261, in main
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\transformers\hf_argparser.py", line 338, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 125, in __init__
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\transformers\training_args.py", line 1400, in __post_init__
    and (self.device.type != "cuda")
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\transformers\training_args.py", line 1857, in device
    return self._setup_devices
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\transformers\utils\generic.py", line 54, in __get__
    cached = self.fget(obj)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\transformers\training_args.py", line 1793, in _setup_devices
    self.distributed_state = PartialState(timeout=timedelta(seconds=self.ddp_timeout))
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\accelerate\state.py", line 170, in __init__
    dist.init_distributed(dist_backend=self.backend, auto_mpi_discovery=False, **kwargs)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\deepspeed\comm\comm.py", line 656, in init_distributed
    cdb = TorchBackend(dist_backend, timeout, init_method, rank, world_size)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\deepspeed\comm\torch.py", line 36, in __init__
    self.init_process_group(backend, timeout, init_method, rank, world_size)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\deepspeed\comm\torch.py", line 40, in init_process_group
    torch.distributed.init_process_group(backend,
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\c10d_logger.py", line 74, in wrapper
    func_return = func(*args, **kwargs)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\distributed_c10d.py", line 1148, in init_process_group
    default_pg, _ = _new_process_group_helper(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\distributed_c10d.py", line 1268, in _new_process_group_helper
    raise RuntimeError("Distributed package doesn't have NCCL built in")
RuntimeError: Distributed package doesn't have NCCL built in
[2023-12-01 19:53:37,082] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 18104) of binary: D:\Software\anaconda3\envs\KG\python.exe
Traceback (most recent call last):
  File "D:\Software\anaconda3\envs\KG\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Software\anaconda3\envs\KG\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\Software\anaconda3\envs\KG\Scripts\torchrun.exe\__main__.py", line 7, in <module>
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\run.py", line 806, in main
    run(args)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\run.py", line 797, in run
    elastic_launch(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\launcher\api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\torch\distributed\launcher\api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_clm_sft_with_peft.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-01_19:53:37
  host      : CCzzzzz
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 18104)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
iMountTai commented 9 months ago

This is likely a Windows issue. Since you only have one GPU, try switching the launcher from torchrun to plain python, i.e. python run_clm_sft_with_peft.py
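A minimal sketch of that change, reusing the variables already defined in the script above (only the launcher changes; every training flag stays the same as in the torchrun invocation):

```shell
# Single-GPU launch on Windows: drop torchrun so no NCCL-backed
# distributed process group is created at startup.
python run_clm_sft_with_peft.py \
    --deepspeed ds_zero2_no_offload.json \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --output_dir ${output_dir}   # ...plus the remaining flags, unchanged
```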

CCzzzzzzz commented 9 months ago

This is likely a Windows issue. Since you only have one GPU, try switching the launcher from torchrun to plain python, i.e. python run_clm_sft_with_peft.py

After changing the script to python run_clm_sft_with_peft.py, the following error occurred:

Traceback (most recent call last):
  File "run_clm_sft_with_peft.py", line 513, in <module>
    main()
  File "run_clm_sft_with_peft.py", line 353, in main
    train_dataset = build_instruction_dataset(
  File "D:\pyProjects2\Chinese-LLaMA-Alpaca-2\scripts\training\build_dataset.py", line 71, in build_instruction_dataset
    raw_dataset = load_dataset("json", data_files=file, cache_dir=cache_path)
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\load.py", line 2109, in load_dataset
    builder_instance = load_dataset_builder(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\load.py", line 1795, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\load.py", line 1404, in dataset_module_factory
    return PackagedDatasetModuleFactory(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\load.py", line 947, in get_module
    data_files = DataFilesDict.from_patterns(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\data_files.py", line 671, in from_patterns
    DataFilesList.from_patterns(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\data_files.py", line 577, in from_patterns
    resolve_pattern(
  File "D:\Software\anaconda3\envs\KG\lib\site-packages\datasets\data_files.py", line 335, in resolve_pattern
    protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""
TypeError: can only concatenate tuple (not "str") to tuple

iMountTai commented 9 months ago

What format is the data under your dataset_dir? Do you have a sample?

CCzzzzzzz commented 9 months ago

What format is the data under your dataset_dir? Do you have a sample?

There is a train.json file under D:\pyProjects2\Retrieve-Rewrite-Answer\finetune-llama\ZJQA\train. A data sample looks like this:

[
  {
    "instruction": "请将下列三元组转化为一句或多句话:(男女两项全能赛, 地点, 绍兴柯桥羊山攀岩中心)。",
    "input": "",
    "output": "男女两项全能赛将在绍兴柯桥羊山攀岩中心举行。"
  },
  {
    "instruction": "请将下列三元组转化为一句或多句话:(许翔宇, 出生地, 太原)。",
    "input": "",
    "output": "许翔宇的出生地是太原。"
  }
  .....
]
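As a quick sanity check on data of this shape, here is a small sketch (the `validate_sft_records` helper is illustrative, not part of the project's scripts) that verifies each record carries the instruction/input/output fields the SFT pipeline expects:

```python
def validate_sft_records(records):
    """Check that every record is a dict with instruction/input/output keys."""
    required = {"instruction", "input", "output"}
    for i, rec in enumerate(records):
        missing = required - set(rec)
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return len(records)

sample = [
    {"instruction": "请将下列三元组转化为一句或多句话:(许翔宇, 出生地, 太原)。",
     "input": "",
     "output": "许翔宇的出生地是太原。"},
]
print(validate_sft_records(sample))  # 1
```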

tmpuserx commented 9 months ago

Has this been solved? I hit the same error fine-tuning other models with LLaMA-Factory.

CCzzzzzzz commented 9 months ago

Has this been solved? I hit the same error fine-tuning other models with LLaMA-Factory.

LLaMA-Factory works fine for me.

tmpuserx commented 9 months ago

Hmm, which version of datasets are you on?

leo19850812 commented 8 months ago

My datasets version is 2.14.0, and I hit the same problem.

tmpuserx commented 8 months ago

My datasets version is 2.14.0, and I hit the same problem.

Upgrading datasets solved it for me. The latest version is 2.15.0, but it seems 2.14.7 and later all work.
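The failing line in resolve_pattern concatenates fs.protocol with a string, but fsspec filesystems may expose .protocol as either a string or a tuple of aliases, which explains the TypeError; the datasets upgrade handles this case. A minimal sketch of the normalization, assuming a hypothetical helper name (this is not the library's actual code):

```python
def protocol_prefix(protocol):
    """Build a URI prefix from an fsspec-style protocol value.

    fsspec filesystems may expose .protocol as a string ("s3") or a
    tuple of aliases (("file", "local")). Normalizing to the first
    alias before concatenating avoids the tuple + str TypeError that
    older datasets versions hit.
    """
    if isinstance(protocol, (tuple, list)):
        protocol = protocol[0]
    return protocol + "://" if protocol != "file" else ""

print(protocol_prefix(("file", "local")))  # "" (local paths get no prefix)
print(protocol_prefix("s3"))              # "s3://"
```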

github-actions[bot] commented 8 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.