QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

[BUG] Can LoRA fine-tuning of Qwen-14B-Chat use DeepSpeed ZeRO-3? On 8 x V100 (16GB), both ZeRO-2 and ZeRO-3 hit OOM #1030

Closed yyyzhao closed 6 months ago

yyyzhao commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 8 months ago

We haven't tested 8 x 16GB cards; 4 x 24GB cards is known to work. See: https://github.com/QwenLM/Qwen/blob/main/recipes/finetune/deepspeed/readme.md#settings-and-gpu-requirements
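As a rough back-of-envelope check (my own estimate, not from the Qwen recipe), ZeRO-3 shards parameters, gradients, and optimizer states across GPUs, which makes it easy to see why full fine-tuning of a 14B model exceeds a 16GB card even when sharded 8 ways, while LoRA's frozen base weights alone would fit:

```python
def zero3_per_gpu_gib(n_params, n_gpus, bytes_per_param):
    """Estimate per-GPU memory (GiB) for state sharded by ZeRO-3.

    bytes_per_param: fp16 weights (2) + fp16 grads (2) + Adam fp32
    states (master weights, momentum, variance: 4+4+4) = 16 for full
    fine-tuning; only 2 (frozen fp16 weights) for a LoRA base model.
    Activations, buffers, and fragmentation all come on top of this.
    """
    return n_params * bytes_per_param / n_gpus / 2**30

N = 14e9   # Qwen-14B-Chat parameter count (approximate)
GPUS = 8   # 8 x V100 16GB

full = zero3_per_gpu_gib(N, GPUS, 16)  # full fine-tune with Adam
lora = zero3_per_gpu_gib(N, GPUS, 2)   # frozen fp16 base weights only

print(f"full fine-tune: {full:.1f} GiB/GPU")   # ~26 GiB -> OOM on 16GB
print(f"LoRA base shard: {lora:.1f} GiB/GPU")  # ~3.3 GiB before activations
```

This is only sharded *state*; long sequences and large batch sizes add activation memory per GPU, which is why even LoRA runs can OOM on 16GB cards without gradient checkpointing or shorter sequence lengths.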

LittleYouEr commented 8 months ago

Same problem here: 8 x V100, full-parameter fine-tuning + ZeRO-3 + CPU offload still OOMs. Scaling out to two machines (2 x 8 V100) hits OOM on the same task as well.

WangJianQ-cmd commented 7 months ago

> Same problem here: 8 x V100, full-parameter fine-tuning + ZeRO-3 + CPU offload still OOMs. Scaling out to two machines (2 x 8 V100) hits OOM on the same task as well.

How did you scale out to two machines for fine-tuning? Could you share the parameters you used?
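For reference, the usual way to scale a DeepSpeed job across two machines is a hostfile plus the `deepspeed` launcher. The sketch below is illustrative only: the script name `finetune.py`, the config file name, and the paths are placeholders for whatever the Qwen recipe uses, not the commenter's actual command.

```shell
# hostfile: one line per node with its GPU slot count.
# Assumes passwordless SSH from the launch node to every listed host.
cat > hostfile <<'EOF'
node1 slots=8
node2 slots=8
EOF

# Launch from node1; the deepspeed launcher starts workers on each
# host in the hostfile. finetune.py and ds_config_zero3.json stand in
# for the repo's fine-tuning script and ZeRO-3 DeepSpeed config.
deepspeed --hostfile hostfile \
  --num_nodes 2 --num_gpus 8 \
  finetune.py \
  --deepspeed ds_config_zero3.json
```

Note that identical code, data, and environment paths must exist on both nodes (or be on a shared filesystem) for this launch style to work.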