THUDM / ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Apache License 2.0

[BUG/Help] Out of GPU memory when fine-tuning ChatGLM-6B on two 3090s with batch size 1 and fp16 #1366

Open cqray1990 opened 11 months ago

cqray1990 commented 11 months ago

Is there an existing issue for this?

Current Behavior

I am fine-tuning on two 3090s with batch size 1 and fp16, so why do I still get an out-of-memory error?

Expected Behavior

No response

Steps To Reproduce

```shell
deepspeed --num_gpus=2 --master_port $MASTER_PORT main.py \
    --deepspeed deepspeed.json \
    --do_train \
    --train_file AdvertiseGen/train.json \
    --test_file AdvertiseGen/dev.json \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path THUDM/chatglm-6b \
    --output_dir ./output/adgen-chatglm-6b-ft-$LR \
    --overwrite_output_dir \
    --max_source_length 64 \
    --max_target_length 64 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --predict_with_generate \
    --max_steps 5000 \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate $LR \
    --fp16
```
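The command points `--deepspeed` at a local `deepspeed.json`. For reference, a minimal sketch of a ZeRO stage 3 configuration with CPU offload, the usual way to fit full fine-tuning of a ~6B-parameter model on 24 GB cards (all values here are illustrative; the file actually shipped with the repo may differ):

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "initial_scale_power": 16
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true
  }
}
```

The `"auto"` values are resolved by the HuggingFace Trainer's DeepSpeed integration from the corresponding command-line arguments. Offloading trades GPU memory for host RAM and PCIe traffic, so each step gets slower, but the fp32 Adam states no longer have to live on the 24 GB cards.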


Environment

- OS:
- Python:
- Transformers:
- PyTorch: 1.13.1
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :
- CUDA: 11.6
- DeepSpeed: 0.10

Anything else?

No response

atri2549 commented 11 months ago

Even with 8 V100-32G cards and batch_size=1, it still runs out of GPU memory :(
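For context, a rough sketch of why full-parameter fine-tuning OOMs even at batch size 1 (back-of-the-envelope, assuming standard mixed-precision Adam: fp16 weights and gradients plus fp32 master weights and two fp32 moment buffers; activations and framework overhead come on top):

$$
\underbrace{(2+2)}_{\text{fp16 weights + grads}} + \underbrace{(4+4+4)}_{\text{fp32 master} +\, m \,+\, v} = 16\ \tfrac{\text{bytes}}{\text{param}}, \qquad 16 \times 6.2\times10^{9}\ \text{params} \approx 99\ \text{GB}
$$

Unless the optimizer states are sharded across GPUs (ZeRO stage 2/3) or offloaded to CPU, that footprint does not fit in 2×24 GB, and can still be tight on 8×32 GB once activations and memory fragmentation are counted.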