THUDM / ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Apache License 2.0

Full-parameter training errors out after modifying the DeepSpeed config #1197

Open liuzhipengchd opened 1 year ago

liuzhipengchd commented 1 year ago

Is there an existing issue for this?

Current Behavior

(screenshot of the error)

The modified deepspeed config:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "zero_allow_untested_optimizer": true,
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "initial_scale_power": 16,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "overlap_comm": false,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "contiguous_gradients": true
  }
}
```

Expected Behavior

No response

Steps To Reproduce

None

Environment

deepspeed                0.8.3
torch                    1.13.0

Anything else?

No response

liuzhipengchd commented 1 year ago

@duzx16 Could you please take a look at this?

liuzhipengchd commented 1 year ago

@Youggls It errors out after enabling offload. Could you take a look?

fallfo commented 1 year ago

Add `"zero_force_ds_cpu_optimizer": false` to deepspeed.json.
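For reference, a minimal sketch of where the suggested key would go, based on the config posted above (the fp16 and bucket settings are left out here for brevity). `zero_force_ds_cpu_optimizer` is a top-level DeepSpeed option, so it sits next to `zero_optimization`, not inside it:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "zero_allow_untested_optimizer": true,
  "zero_force_ds_cpu_optimizer": false,
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    }
  }
}
```

As far as I understand, setting it to false relaxes DeepSpeed's check that CPU offload must use its own DeepSpeedCPUAdam, so the stock torch optimizer can be kept.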

liuzhipengchd commented 1 year ago

> Add `"zero_force_ds_cpu_optimizer": false` to deepspeed.json.

I added this parameter, but it still errors out:

CPU Virtual Memory: used = 597.02 GB, percent = 59.3%
[2023-06-08 20:35:56,134] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 223933
[2023-06-08 20:35:59,137] [INFO] [launch.py:314:sigkill_handler] Killing subprocess 223934
[2023-06-08 20:35:59,138] [ERROR] [launch.py:320:sigkill_handler] ['/usr/local/lib/miniconda3/envs/cloud-ai-lab/bin/python', '-u', 'ft_main.py', '--local_rank=1', '--deepspeed', 'deepspeed.json', '--do_train', '--train_file', '/dev/shm/data/train.json', '--test_file', '/dev/shm/data/dev.json', '--prompt_column', 'prompt', '--response_column', 'answer', '--history_column', 'history', '--overwrite_cache', '--model_name_or_path',

If I don't add "offload_optimizer": { "device": "cpu", "pin_memory": true },

the GPU runs out of memory right away.
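For clarity, the variant without CPU offload mentioned above would look roughly like this, i.e. the posted config with only the `offload_optimizer` block dropped; this is the setup that hits GPU OOM here:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "zero_allow_untested_optimizer": true,
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "overlap_comm": false,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "contiguous_gradients": true
  }
}
```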