shibing624 / MedicalGPT

MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains a medical large language model, implementing continued pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
Apache License 2.0

Single-machine multi-GPU: RuntimeError: value cannot be converted to type int without overflow when running SFT with torchrun #222

Closed: l1905 closed this issue 10 months ago

l1905 commented 1 year ago

Describe the bug

Starting SFT with CUDA_VISIBLE_DEVICES=1,2,3,4,5 python supervised_finetuning.py \ works without errors, but running it in torchrun mode raises "value cannot be converted to type int without overflow". Two weeks ago the same run worked fine; this time I created a fresh conda environment and pulled the latest code, and the error appeared. I could not find a similar report online, so it feels like a dependency that was updated to a newer version is the cause. Launch command:

CUDA_VISIBLE_DEVICES=1,2,3,4,5 torchrun --nproc_per_node 5 supervised_finetuning.py \
    --model_type baichuan \
    --model_name_or_path /data/litong/base_model/Baichuan2-7B-Chat \
    --train_file_dir /data/litong/lt_data/finetune \
    --validation_file_dir /data/litong/lt_data/finetune \
    --template_name baichuan-chat \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --do_train \
    --do_eval \
    --use_peft True \
    --fp16 \
    --max_train_samples 20 \
    --max_eval_samples 10 \
    --num_train_epochs 1 \
    --learning_rate 2e-5 \
    --warmup_ratio 0.05 \
    --weight_decay 0.05 \
    --logging_strategy steps \
    --logging_steps 10 \
    --eval_steps 50 \
    --evaluation_strategy steps \
    --save_steps 500 \
    --save_strategy steps \
    --save_total_limit 3 \
    --gradient_accumulation_steps 1 \
    --preprocessing_num_workers 4 \
    --output_dir outputs-sft-bloom-v1 \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --target_modules all \
    --lora_rank 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --torch_dtype float16 \
    --device_map auto \
    --report_to tensorboard \
    --ddp_find_unused_parameters False \
    --gradient_checkpointing True \
    --cache_dir ./cache

Full error output:


trainable params: 17,891,328 || all params: 7,523,864,576 || trainable%: 0.23779439168895536
2023-09-26 10:12:45.415 | INFO     | __main__:main:906 - *** Train ***
trainable params: 17,891,328 || all params: 7,523,864,576 || trainable%: 0.23779439168895536
2023-09-26 10:12:45.829 | INFO     | __main__:main:906 - *** Train ***
Traceback (most recent call last):
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 954, in <module>
    main()
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 918, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1553, in train
    return inner_training_loop(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1682, in _inner_training_loop
    model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1284, in prepare
    result = tuple(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1285, in <genexpr>
    self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1090, in _prepare_one
    return self.prepare_model(obj, device_placement=device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1429, in prepare_model
    model = torch.nn.parallel.DistributedDataParallel(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 674, in __init__
    _verify_param_shape_across_processes(self.process_group, parameters)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/utils.py", line 118, in _verify_param_shape_across_processes
    return dist._verify_params_across_processes(process_group, tensors, logger)
RuntimeError: value cannot be converted to type int without overflow
trainable params: 17,891,328 || all params: 7,523,864,576 || trainable%: 0.23779439168895536
2023-09-26 10:12:46.273 | INFO     | __main__:main:906 - *** Train ***
Traceback (most recent call last):
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 954, in <module>
Traceback (most recent call last):
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 954, in <module>
    main()
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 918, in main
    main()
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 918, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1553, in train
Traceback (most recent call last):
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 954, in <module>
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1553, in train
    return inner_training_loop(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1682, in _inner_training_loop
    main()
  File "/data/litong/MedicalGPT/supervised_finetuning.py", line 918, in main
    return inner_training_loop(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1682, in _inner_training_loop
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1553, in train
    model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1284, in prepare
    model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1284, in prepare
    result = tuple(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1285, in <genexpr>
    return inner_training_loop(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/transformers/trainer.py", line 1682, in _inner_training_loop
    result = tuple(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1285, in <genexpr>
    self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1090, in _prepare_one
    model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1284, in prepare
    return self.prepare_model(obj, device_placement=device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1429, in prepare_model
    self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1090, in _prepare_one
    return self.prepare_model(obj, device_placement=device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1429, in prepare_model
    result = tuple(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1285, in <genexpr>
    model = torch.nn.parallel.DistributedDataParallel(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 674, in __init__
    _verify_param_shape_across_processes(self.process_group, parameters)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/utils.py", line 118, in _verify_param_shape_across_processes
    return dist._verify_params_across_processes(process_group, tensors, logger)
RuntimeError: value cannot be converted to type int without overflow
    self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1090, in _prepare_one
    model = torch.nn.parallel.DistributedDataParallel(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 674, in __init__
    _verify_param_shape_across_processes(self.process_group, parameters)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/utils.py", line 118, in _verify_param_shape_across_processes
    return self.prepare_model(obj, device_placement=device_placement)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/accelerate/accelerator.py", line 1429, in prepare_model
    return dist._verify_params_across_processes(process_group, tensors, logger)
RuntimeError: value cannot be converted to type int without overflow
    model = torch.nn.parallel.DistributedDataParallel(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 674, in __init__
    _verify_param_shape_across_processes(self.process_group, parameters)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/utils.py", line 118, in _verify_param_shape_across_processes
    return dist._verify_params_across_processes(process_group, tensors, logger)
RuntimeError: value cannot be converted to type int without overflow
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 29854 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 29855) of binary: /data/miniconda3/envs/env_litong_med/bin/python
Traceback (most recent call last):
  File "/data/miniconda3/envs/env_litong_med/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/data/miniconda3/envs/env_litong_med/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
supervised_finetuning.py FAILED
------------------------------------------------------------
Failures:
[1]:
  time      : 2023-09-26_10:12:49
  host      : VM-16-15-centos
  rank      : 2 (local_rank: 2)
  exitcode  : 1 (pid: 29856)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
  time      : 2023-09-26_10:12:49
  host      : VM-16-15-centos
  rank      : 3 (local_rank: 3)
  exitcode  : 1 (pid: 29857)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
  time      : 2023-09-26_10:12:49
  host      : VM-16-15-centos
  rank      : 4 (local_rank: 4)
  exitcode  : 1 (pid: 29858)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-09-26_10:12:49
  host      : VM-16-15-centos
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 29855)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

Dependency package versions

absl-py==2.0.0
accelerate==0.23.0
aiohttp==3.8.5
aiosignal==1.3.1
async-timeout==4.0.3
attrs==23.1.0
bitsandbytes==0.41.1
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.2.0
cmake==3.27.5
datasets==2.14.5
deepspeed==0.10.3
dill==0.3.7
docopt==0.6.2
filelock==3.12.4
frozenlist==1.4.0
fsspec==2023.6.0
google-auth==2.23.0
google-auth-oauthlib==1.0.0
grpcio==1.58.0
hjson==3.1.0
huggingface-hub==0.17.2
idna==3.4
Jinja2==3.1.2
lit==16.0.6
loguru==0.7.2
Markdown==3.4.4
MarkupSafe==2.1.3
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.15
networkx==3.1
ninja==1.11.1
numpy==1.26.0
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
oauthlib==3.2.2
packaging==23.1
pandas==2.1.1
peft==0.5.0
pipreqs==0.4.13
protobuf==4.24.3
psutil==5.9.5
py-cpuinfo==9.0.0
pyarrow==13.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.12
python-dateutil==2.8.2
pytz==2023.3.post1
PyYAML==6.0.1
regex==2023.8.8
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
safetensors==0.3.3
scipy==1.11.2
sentencepiece==0.1.99
six==1.16.0
sympy==1.12
tensorboard==2.14.0
tensorboard-data-server==0.7.1
tokenizers==0.13.3
torch==2.0.1
tqdm==4.66.1
transformers==4.33.2
triton==2.0.0
trl==0.7.1
typing_extensions==4.8.0
tzdata==2023.3
urllib3==1.26.16
Werkzeug==2.3.7
xformers==0.0.21
xxhash==3.3.0
yarg==0.1.9
yarl==1.9.2

Python version

Python 3.10.0 (default, Mar  3 2022, 09:58:08) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Environment information

40 GB of GPU memory per card

Reproduced with SFT on both baichuan2-7b and baichuan2-13b.

shibing624 commented 1 year ago

I'm not sure about the environment issue. Things to try (see the shell sketch after this list):

  1. Downgrade transformers;
  2. Uninstall xformers;
  3. Uninstall bitsandbytes.
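
A minimal shell sketch of suggestions 2 and 3, assuming the environment is managed with pip (the target version for the transformers downgrade in suggestion 1 is not given here; a pinned version is named later in the thread):

# remove optional packages that can interfere with distributed model preparation
pip uninstall -y xformers bitsandbytes
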
xxcoco763 commented 1 year ago

Same issue here. Has it been solved?

murray-z commented 1 year ago

Same issue here. Has it been solved?

tjb-tech commented 1 year ago

value cannot be converted to type int without overflow

Has this been solved? I'm running into the same problem.

shibing624 commented 1 year ago

For Baichuan models, use transformers==4.33.2.
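
A one-line sketch of that suggestion, assuming pip (4.33.2 is the version named above; it also matches the version already shown in the reporter's package list):

pip install transformers==4.33.2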

tjb-tech commented 1 year ago

For Baichuan models, use transformers==4.33.2.

I hit this problem while fine-tuning chatglm3-6b, and transformers is already 4.33.2. Could there be another cause?

shibing624 commented 1 year ago

What problem exactly? I got chatglm3-6b multi-GPU training running today.

tjb-tech commented 1 year ago

What problem exactly? I got chatglm3-6b multi-GPU training running today.

When training chatglm3-6b on two GPUs in torchrun mode, I got the following error:

File "supervised_finetuning.py", line 1325, in <module>
main()
File "supervised_finetuning.py", line 1286, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/transformers/trainer.py", line 1553, in train
return inner_training_loop(
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/transformers/trainer.py", line 1682, in _inner_training_loop
model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1202, in prepare
result = tuple(
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1203, in <genexpr>
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1030, in _prepare_one
return self.prepare_model(obj, device_placement=device_placement)
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/accelerate/accelerator.py", line 1340, in prepare_model
model = torch.nn.parallel.DistributedDataParallel(
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 655, in __init__
_verify_param_shape_across_processes(self.process_group, parameters)
File "/root/paddlejob/workspace/env_run/llm/yes/envs/medicalGPT/lib/python3.8/site-packages/torch/distributed/utils.py", line 112, in _verify_param_shape_across_processes
return dist._verify_params_across_processes(process_group, tensors, logger)
RuntimeError: value cannot be converted to type int without overflow

Here transformers==4.33.2, and I have already uninstalled xformers and bitsandbytes. Is there anything else that could be wrong?

tjb-tech commented 1 year ago

What problem exactly? I got chatglm3-6b multi-GPU training running today.

Could you share the script you trained with successfully? Mine is:

cd /root/paddlejob/workspace/env_run/llm/MedicalGPT
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node 2 supervised_finetuning.py \
    --model_type chatglm \
    --model_name_or_path /root/paddlejob/workspace/env_run/llm/chatglm3-6b \
    --train_file_dir ./data/finetune \
    --validation_file_dir ./data/finetune \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --do_train \
    --do_eval \
    --use_peft True \
    --fp16 \
    --max_train_samples 1000 \
    --max_eval_samples 10 \
    --num_train_epochs 3 \
    --learning_rate 2e-5 \
    --warmup_ratio 0.05 \
    --weight_decay 0.05 \
    --logging_strategy steps \
    --logging_steps 10 \
    --eval_steps 50 \
    --evaluation_strategy steps \
    --save_steps 500 \
    --save_strategy steps \
    --save_total_limit 3 \
    --gradient_accumulation_steps 1 \
    --preprocessing_num_workers 4 \
    --output_dir outputs-sft-chatglm3-genrec \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --target_modules all \
    --lora_rank 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --torch_dtype float16 \
    --device_map auto \
    --report_to tensorboard \
    --ddp_find_unused_parameters False \
    --gradient_checkpointing True \
    --cache_dir ./cache

The same error also occurs when I switch the base model to vicuna-v1.5.

tjb-tech commented 1 year ago

Solved: just update accelerate to the latest version.
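
A minimal sketch of that fix, assuming pip (no specific accelerate version is pinned in the thread; the reporter's environment had accelerate==0.23.0 installed, and the fix is simply to upgrade):

# upgrade accelerate to the latest release
pip install -U accelerate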