PaddlePaddle / PaddleNLP

👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc.
https://paddlenlp.readthedocs.io
Apache License 2.0

【Bug】ChatGLM training: the program crashes right after performing one save #6131

Closed liuzhipengchd closed 7 months ago

liuzhipengchd commented 1 year ago

Software environment

paddlepaddle-gpu         0.0.0.post112
paddlenlp: latest version pulled from the repository

Duplicate issue

Error description

100%|██████████| 461/461 [01:42<00:00,  9.41it/s][2023-06-08 17:35:08,268] [    INFO] - Saving model checkpoint to ./checkpoints/chatglm-6b/checkpoint-10
[2023-06-08 17:35:08,281] [    INFO] - Configuration saved in ./checkpoints/chatglm-6b/checkpoint-10/config.json
[2023-06-08 17:35:21,933] [    INFO] - tokenizer config file saved in ./checkpoints/chatglm-6b/checkpoint-10/tokenizer_config.json
[2023-06-08 17:35:21,935] [    INFO] - Special tokens file saved in ./checkpoints/chatglm-6b/checkpoint-10/special_tokens_map.json
LAUNCH INFO 2023-06-08 17:35:48,689 Pod failed
LAUNCH ERROR 2023-06-08 17:35:48,689 Container failed !!!
Container rank 1 status failed cmd ['/usr/local/lib/miniconda3/envs/cloud-ai-lab/bin/python', '-u', 'finetune_generation.py', '--model_name_or_path', '/mnt/afs/winning/oro/lzp/chatglm', '--task_name_or_path', '/dev/shm/data', '--max_steps', '3000', '--learning_rate', '3e-4', '--warmup_steps', '20', '--eval_steps', '5', '--logging_steps', '1', '--save_steps', '10', '--save_total_limit', '1', '--output_dir', './checkpoints/chatglm-6b', '--src_length', '700', '--tgt_length', '300', '--per_device_eval_batch_size', '8', '--per_device_train_batch_size', '8', '--gradient_accumulation_steps', '1', '--fp16', '--fp16_opt_level', 'O2', '--recompute', 'True', '--do_train', '--do_eval', '--load_best_model_at_end', 'True', '--tensor_parallel_degree', '2'] code -9 log log/workerlog.1 
env {'SHELL': '/bin/bash', 'NV_LIBCUBLAS_VERSION': '11.5.1.109-1', 'NVIDIA_VISIBLE_DEVICES': 'GPU-fd92cd9a-2ff3-be6f-983f-df908f1a5167,GPU-a59c937f-9eb2-d601-daa0-d940f837eeca', 'KUBERNETES_SERVICE_PORT_HTTPS': '443', 'COLORTERM': 'truecolor', 'NV_NVML_DEV_VERSION': '11.3.58-1', 'NV_CUDNN_PACKAGE_NAME': 'libcudnn8', 'KUBERNETES_SERVICE_PORT': '443', 'TERM_PROGRAM_VERSION': '1.70.1', 'NV_LIBNCCL_DEV_PACKAGE': 'libnccl-dev=2.9.9-1+cuda11.3', 'CONDA_EXE': '/usr/local/lib/miniconda3/bin/conda', '_CE_M': '', 'NV_LIBNCCL_DEV_PACKAGE_VERSION': '2.9.9-1', 'CONTAINER': 'devmachine', 'HOSTNAME': 'ccec1896-ae7d-4048-b214-09cc405dd07e', 'PYTHON_VERSION': '3.8', 'NVIDIA_REQUIRE_CUDA': 'cuda>=11.3 brand=tesla,driver>=418,driver<419 driver>=450', 'NV_LIBCUBLAS_DEV_PACKAGE': 'libcublas-dev-11-3=11.5.1.109-1', 'NV_NVTX_VERSION': '11.3.109-1', 'AILAB_ZONE': 'cn-sh-01a', 'NV_CUDA_CUDART_DEV_VERSION': '11.3.109-1', 'NV_LIBCUSPARSE_VERSION': '11.6.0.109-1', 'NV_LIBNPP_VERSION': '11.3.3.95-1', 'TORCH_VISION_VERSION': '0.13.1', 'NCCL_VERSION': '2.9.9-1', 'VSCODE_PROXY_URI': 'https://vscode-ccec1896-ae7d-4048-b214-09cc405dd07e.aicl-proxy.cn-sh-01.sensecore.cn:33080/proxy/{{port}}', 'MMCV_FULL_VERSION': '', 'PWD': '/mnt/afs/winning/oro/lzp/PaddleNLP/examples/language_model/chatglm', 'VSCODE_PORT': '8000', 'TENSORFLOW_VERSION': '', 'CONDA_PREFIX': '/usr/local/lib/miniconda3/envs/cloud-ai-lab', 'NV_CUDNN_PACKAGE': 'libcudnn8=8.2.0.53-1+cuda11.3', 'NAMESPACE': 'defaultcn-sh-01aaicl', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NV_NVPROF_DEV_PACKAGE': 'cuda-nvprof-11-3=11.3.111-1', 'NV_LIBNPP_PACKAGE': 'libnpp-11-3=11.3.3.95-1', 'NV_LIBNCCL_DEV_PACKAGE_NAME': 'libnccl-dev', 'TZ': 'Asia/Shanghai', 'VSCODE_GIT_ASKPASS_NODE': '/devmachine/app/vscode/lib/node', 'NV_LIBCUBLAS_DEV_VERSION': '11.5.1.109-1', 'NV_LIBCUBLAS_DEV_PACKAGE_NAME': 'libcublas-dev-11-3', 'NV_CUDA_CUDART_VERSION': '11.3.109-1', 'HOME': '/root', 'LANG': 'en_US.utf-8', 'KUBERNETES_PORT_443_TCP': 'tcp://10.96.0.1:443', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'DEV_USER': 'root:2000', 'CUDA_VERSION': 
'11.3.1', 'NV_LIBCUBLAS_PACKAGE': 'libcublas-11-3=11.5.1.109-1', 'ROOTABLE': 'true', 'DEV_GROUPS': '', 'CONDA_PROMPT_MODIFIER': '(cloud-ai-lab) ', 'GIT_ASKPASS': '/devmachine/app/vscode/lib/vscode/extensions/git/dist/askpass.sh', 'PYTHON_MAJOR_VERSION': '3', 'PYTHON_MINOR_VERSION': '8', 'TORCH_AUDIO_VERSION': '0.12.1', 'NV_LIBNPP_DEV_PACKAGE': 'libnpp-dev-11-3=11.3.3.95-1', 'MMPOSE_VERSION': '', 'NV_LIBCUBLAS_PACKAGE_NAME': 'libcublas-11-3', 'TELEPORT_ID': 'ccec1896-ae7d-4048-b214-09cc405dd07e', 'NV_LIBNPP_DEV_VERSION': '11.3.3.95-1', 'VSCODE_GIT_ASKPASS_EXTRA_ARGS': '', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'TERM': 'xterm-256color', 'NV_LIBCUSPARSE_DEV_VERSION': '11.6.0.109-1', 'MMDET_VERSION': '', '_CE_CONDA': '', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'MMDET3D_VERSION': '', 'LIBRARY_PATH': '/usr/local/cuda/lib64/stubs', 'NV_CUDNN_VERSION': '8.2.0.53', 'VSCODE_GIT_IPC_HANDLE': '/tmp/vscode-git-132677bda5.sock', 'CONDA_ENV_NAME': 'cloud-ai-lab', 'CONDA_SHLVL': '2', 'SHLVL': '1', 'TELEPORT_AUTH_SERVERS': 'aicl-proxy-internal.cn-sh-01.sensecore.cn:33080', 'CONDA_DIR': '/usr/local/lib/miniconda3', 'NV_CUDA_LIB_VERSION': '11.3.1-1', 'NVARCH': 'x86_64', 'KUBERNETES_PORT_443_TCP_PROTO': 'tcp', 'NV_CUDNN_PACKAGE_DEV': 'libcudnn8-dev=8.2.0.53-1+cuda11.3', 'KUBERNETES_PORT_443_TCP_ADDR': '10.96.0.1', 'NV_CUDA_COMPAT_PACKAGE': 'cuda-compat-11-3', 'PIP_ROOT_USER_ACTION': 'ignore', 'COMMAND': '', 'CONDA_PYTHON_EXE': '/usr/local/lib/miniconda3/bin/python', 'NV_LIBNCCL_PACKAGE': 'libnccl2=2.9.9-1+cuda11.3', 'LD_LIBRARY_PATH': '/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda-11.3/targets/x86_64-linux/lib:/usr/local/cuda-11.3/targets/x86_64-linux/lib/stubs', 'SSHD_PASSWORD': 'D3gCWWU51nlKYGqC', 'MMCLS_VERSION': '', 'MMOCR_VERSION': '', 'CONDA_DEFAULT_ENV': 'cloud-ai-lab', 'AILAB_ENV': 'cn', 'MMSEGMENTATION_VERSION': '', 'AILAB_VOLUME_PATH': '6b29c105-f454-11ed-bb61-5edf96b0b28c:/mnt/afs', 'KUBERNETES_SERVICE_HOST': '10.96.0.1', 'NV_NVPROF_VERSION': '11.3.111-1', 'LC_ALL': 'en_US.UTF-8', 'JUPYTERLAB_PORT': '9000', 'KUBERNETES_PORT': 'tcp://10.96.0.1:443', 'KUBERNETES_PORT_443_TCP_PORT': '443', 'devmachine_id': 'defaultcn-sh-01aaicl-ccec1896-ae7d-4048-b214-09cc405dd07e', 'VSCODE_GIT_ASKPASS_MAIN': '/devmachine/app/vscode/lib/vscode/extensions/git/dist/askpass-main.js', 'PATH': '/root/.scc/bin:/usr/local/lib/miniconda3/envs/cloud-ai-lab/bin:/devmachine/app/vscode/bin:/devmachine/sbin:/devmachine/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/devmachine/app/jupyterlab/envs/jupyterlab/bin:/usr/local/lib/miniconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin', 'MINI_CONDA_VERSION': 'Miniconda3-latest-Linux-x86_64.sh', 'frp_server_addr': 'cci-proxy-internal.cn-sh-01.sensecore.cn', 'NV_LIBNCCL_PACKAGE_NAME': 'libnccl2', 'NV_LIBNCCL_PACKAGE_VERSION': '2.9.9-1', 'RESOURCE_TYPE': 'compute.aicl.v1.instance', 'CONDA_PREFIX_1': '/usr/local/lib/miniconda3', 'AILAB_IMAGE': 'ubuntu20.04-py3.8-cuda11.3-cudnn8-torch1.12', 'OPENCV_PYTHON_VERSION': '', 'DEBIAN_FRONTEND': 'noninteractive', 'GIT_TERMINAL_PROMPT': '1', 'TORCH_VERSION': '1.12.1', 'OLDPWD': '/mnt/afs/winning/oro/lzp/PaddleNLP', 'TERM_PROGRAM': 'vscode', 'frp_server_port': '20001', 'VSCODE_IPC_HOOK_CLI': '/tmp/vscode-ipc-74af49da-81d8-40d6-8e02-e0dd318dbdbf.sock', '_': '/usr/local/lib/miniconda3/envs/cloud-ai-lab/bin/python', 'CUSTOM_DEVICE_ROOT': '', 'OMP_NUM_THREADS': '1', 'POD_NAME': 'mhkvzz', 'PADDLE_MASTER': '10.119.16.10:60278', 'PADDLE_GLOBAL_SIZE': '2', 'PADDLE_LOCAL_SIZE': '2', 'PADDLE_GLOBAL_RANK': '1', 
'PADDLE_LOCAL_RANK': '1', 'PADDLE_NNODES': '1', 'PADDLE_TRAINER_ENDPOINTS': '10.119.16.10:60279,10.119.16.10:60280', 'PADDLE_CURRENT_ENDPOINT': '10.119.16.10:60280', 'PADDLE_TRAINER_ID': '1', 'PADDLE_TRAINERS_NUM': '2', 'PADDLE_RANK_IN_NODE': '1', 'FLAGS_selected_gpus': '1'}
LAUNCH INFO 2023-06-08 17:35:48,689 ------------------------- ERROR LOG DETAIL -------------------------
    INFO] tensor_parallel.py:32 - start broadcast mp parameters
[2023-06-08 17:31:24,522] [    INFO] tensor_parallel.py:39 - start broadcast dp parameters
[2023-06-08 17:31:24,649] [    INFO] tensor_parallel.py:42 - mp's parameters is ready
[2023-06-08 17:31:24,649] [ WARNING] hybrid_parallel_optimizer.py:261 - While using ClipGradByGlobalNorm in TensorParallel, PipelineParallel or Sharding, the grad clip of original optimizer will be changed.
[2023-06-08 17:31:24,649] [    INFO] - ***** Running training *****
[2023-06-08 17:31:24,649] [    INFO] -   Num examples = 71639
[2023-06-08 17:31:24,649] [    INFO] -   Num Epochs = 1
[2023-06-08 17:31:24,649] [    INFO] -   Instantaneous batch size per device = 8
[2023-06-08 17:31:24,650] [    INFO] -   Total train batch size (w. parallel, distributed & accumulation) = 8
[2023-06-08 17:31:24,650] [    INFO] -   Gradient Accumulation steps = 1
[2023-06-08 17:31:24,650] [    INFO] -   Total optimization steps = 3000
[2023-06-08 17:31:24,650] [    INFO] -   Total num train samples = 24000
[2023-06-08 17:31:24,653] [    INFO] -   Number of trainable parameters = 3086991360 (per device)
[2023-06-08 17:31:24,655] [    INFO] -   Number of trainable parameters = 6173982720 (all devices, roughly)
[2023-06-08 17:31:31,950] [ WARNING] - optimizer not run, scale_before: 32768.0, scale_after: 32768.0
Found inf or nan, current scale is: 32768.0, decrease to: 32768.0*0.5
[2023-06-08 17:31:32,860] [ WARNING] - optimizer not run, scale_before: 32768.0, scale_after: 16384.0
[2023-06-08 17:31:34,041] [ WARNING] - optimizer not run, scale_before: 16384.0, scale_after: 16384.0
Found inf or nan, current scale is: 16384.0, decrease to: 16384.0*0.5
[2023-06-08 17:31:35,185] [ WARNING] - optimizer not run, scale_before: 16384.0, scale_after: 8192.0
[2023-06-08 17:31:36,118] [    INFO] - ***** Running Evaluation *****
[2023-06-08 17:31:36,118] [    INFO] -   Num examples = 3685
[2023-06-08 17:31:36,119] [    INFO] -   Total prediction steps = 461
[2023-06-08 17:31:36,119] [    INFO] -   Pre device batch size = 8
[2023-06-08 17:31:36,119] [    INFO] -   Total Batch size = 8
[2023-06-08 17:33:21,813] [ WARNING] - optimizer not run, scale_before: 8192.0, scale_after: 8192.0
[2023-06-08 17:33:25,422] [    INFO] - ***** Running Evaluation *****
[2023-06-08 17:33:25,422] [    INFO] -   Num examples = 3685
[2023-06-08 17:33:25,423] [    INFO] -   Total prediction steps = 461
[2023-06-08 17:33:25,423] [    INFO] -   Pre device batch size = 8
[2023-06-08 17:33:25,423] [    INFO] -   Total Batch size = 8
[2023-06-08 17:35:08,268] [    INFO] - Saving model checkpoint to ./checkpoints/chatglm-6b/checkpoint-10
LAUNCH INFO 2023-06-08 17:35:50,808 Exit code -15

Steps & code to reproduce reliably

python -m paddle.distributed.launch --gpus "0,1" finetune_generation.py \
--model_name_or_path /mnt/afs/oro/chatglm \
--task_name_or_path /dev/shm/data \
--max_steps 3000 \
--learning_rate 3e-4 \
--warmup_steps 20 \
--eval_steps 5 \
--logging_steps 1 \
--save_steps 10 \
--save_total_limit 1 \
--output_dir ./checkpoints/chatglm-6b \
--src_length 700 \
--tgt_length 300 \
--per_device_eval_batch_size 8 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 1 \
--fp16 \
--fp16_opt_level O2 \
--recompute True \
--do_train \
--do_eval \
--load_best_model_at_end True \
--tensor_parallel_degree 2
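
For reference, the flag values in this command are consistent with the counts printed in the training log above; a quick back-of-envelope check (plain arithmetic, not PaddleNLP code):

```python
# Consistency check against the numbers in the training log above.
gpus = 2
tensor_parallel_degree = 2
per_device_train_batch_size = 8
gradient_accumulation_steps = 1

# Both GPUs are consumed by tensor parallelism, so the data-parallel degree is 1
# and the effective global batch size equals the per-device batch size.
data_parallel_degree = gpus // tensor_parallel_degree
total_train_batch_size = (per_device_train_batch_size
                          * gradient_accumulation_steps
                          * data_parallel_degree)

# Each tensor-parallel rank holds roughly half of the trainable parameters.
total_trainable_params = 6_173_982_720     # "all devices, roughly" in the log
per_device_params = total_trainable_params // tensor_parallel_degree

print(total_train_batch_size)  # 8          -> matches "Total train batch size ... = 8"
print(per_device_params)       # 3086991360 -> matches "per device" in the log
```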
liuzhipengchd commented 1 year ago

If I add --lora, there is no error. With full finetuning, could it be that the files being written are so large that the write takes too long and synchronization between the training nodes breaks? (A rough size comparison follows the log below.)

eval_loss: 5.4856109619140625, eval_accuracy: 0.23814019353316027, eval_runtime: 2.8708, eval_samples_per_second: 27.518, eval_steps_per_second: 3.483, eval_ppl: 241.19626072010593, epoch: 0.0056

  2%|▏         | 50/3000 [00:53<41:20,  1.19it/s]

100%|██████████| 10/10 [00:02<00:00,  4.00it/s][2023-06-08 20:52:02,089] [    INFO] - Saving model checkpoint to ./checkpoints/chatglm-6b/checkpoint-50                    
[2023-06-08 20:52:02,100] [    INFO] - Configuration saved in ./checkpoints/chatglm-6b/checkpoint-50/config.json
[2023-06-08 20:52:09,430] [    INFO] - tokenizer config file saved in ./checkpoints/chatglm-6b/checkpoint-50/tokenizer_config.json
[2023-06-08 20:52:09,430] [    INFO] - Special tokens file saved in ./checkpoints/chatglm-6b/checkpoint-50/special_tokens_map.json
LAUNCH INFO 2023-06-08 20:52:30,807 Exit code -9
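
For context on the size question raised above: a full-finetune checkpoint serializes every model parameter (plus optimizer state), while a LoRA checkpoint stores only the small adapter matrices, so the amount written at each save differs by roughly three orders of magnitude. A rough sketch, assuming a hypothetical rank-8 adapter on the attention projections of a 28-layer, hidden-size-4096 model (roughly ChatGLM-6B's shape); the modules PaddleNLP's LoRA implementation actually targets may differ:

```python
# Rough size comparison: full-finetune weights vs. a LoRA-style adapter checkpoint.
# The layer count, hidden size, and rank-8 adapter below are illustrative assumptions.
GiB, MiB = 1024 ** 3, 1024 ** 2

total_params = 6_173_982_720
full_weights_fp16 = total_params * 2 / GiB            # ~11.5 GiB of fp16 weights alone

layers, hidden, rank = 28, 4096, 8
qkv_out = 3 * hidden                                   # fused query/key/value projection
lora_params = layers * (rank * (hidden + qkv_out)      # adapter on the QKV projection
                        + rank * (hidden + hidden))    # adapter on the output projection
lora_fp16 = lora_params * 2 / MiB                      # ~10.5 MiB

print(f"full fp16 weights ≈ {full_weights_fp16:.1f} GiB, LoRA adapter ≈ {lora_fp16:.1f} MiB")
```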
LemonNoel commented 1 year ago

Is there a detailed error in log/workerlog.0 or log/workerlog.1?
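
For anyone hitting this later: code -9 means the worker process was killed with SIGKILL, which is often the kernel OOM killer, so besides the worker logs the host's dmesg output can be worth a look. A minimal sketch (assuming the launcher's default ./log directory) for snapshotting the log tails before the next run overwrites them:

```python
# Hypothetical helper (not part of PaddleNLP): print the tail of every worker log
# written by paddle.distributed.launch so the error survives the next run.
from pathlib import Path

for log_file in sorted(Path("log").glob("workerlog.*")):
    lines = log_file.read_text(errors="ignore").splitlines()
    print(f"===== {log_file} (last 50 lines) =====")
    print("\n".join(lines[-50:]))
```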

liuzhipengchd commented 1 year ago

> Is there a detailed error in log/workerlog.0 or log/workerlog.1?

The logs are gone now. In the end I changed it to save the model only once. The files in the checkpoint that store the gradient/optimizer state are too large, about 48 GB for a single node. Could that be the reason the file stream write fails?
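
The ~48 GB figure is plausible for a full finetune: with --fp16 --fp16_opt_level O2 and an Adam-style optimizer, a checkpoint typically holds fp16 weights, fp32 master weights, and two fp32 optimizer moments for every trainable parameter. A rough per-rank estimate under those assumptions (the exact contents of a PaddleNLP checkpoint may differ):

```python
# Rough checkpoint-size estimate for one tensor-parallel rank, assuming fp16 weights,
# fp32 master weights, and two fp32 Adam moments are all serialized. Illustration only;
# the exact layout of a PaddleNLP checkpoint may differ.
GB = 10 ** 9

per_device_params = 3_086_991_360             # from the training log above
fp16_weights = per_device_params * 2 / GB     # ~6.2 GB
fp32_master  = per_device_params * 4 / GB     # ~12.3 GB
adam_moments = per_device_params * 8 / GB     # ~24.7 GB (two fp32 moments)

print(f"≈ {fp16_weights + fp32_master + adam_moments:.0f} GB per rank")  # ≈ 43 GB
```

If each save really writes on that order of data, doing it every 10 steps (--save_steps 10) would indeed dominate step time and put pressure on host memory and disk.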