OpenLLMAI / OpenRLHF

An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
https://openrlhf.readthedocs.io/
Apache License 2.0

Workers (tasks / actors) killed due to memory pressure (OOM) #198

Closed · LSC527 closed this issue 5 months ago

LSC527 commented 5 months ago

Running examples/scripts/train_ppo_llama_ray_70b.sh hits an out-of-memory (OOM) error:

(ActorModelRayActor pid=21287, ip=xxx) Loading extension module cpu_adam...
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,886] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.13.0, git-hash=unknown, git-branch=unknown
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,886] [INFO] [comm.py:662:init_distributed] Distributed backend already initialized
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,906] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,908] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,908] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,955] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,955] [INFO] [utils.py:56:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,955] [INFO] [logging.py:96:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:02,955] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,086] [INFO] [utils.py:791:see_memory_usage] Stage 3 initialize beginning
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,087] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 34.01 GB         CA 35.85 GB         Max_CA 36 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,087] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 68.86 GB, percent = 9.1%
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,090] [INFO] [stage3.py:128:__init__] Reduce bucket size 500,000,000
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,090] [INFO] [stage3.py:129:__init__] Prefetch bucket size 50,000,000
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,228] [INFO] [utils.py:791:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,229] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 35.85 GB         Max_CA 36 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,229] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 68.86 GB, percent = 9.1%
(ActorModelRayActor pid=21287, ip=xxx) Parameter Offload: Total persistent parameters: 1318912 in 161 params
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,393] [INFO] [utils.py:791:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,394] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 35.85 GB         Max_CA 36 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,395] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 68.87 GB, percent = 9.1%
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,526] [INFO] [utils.py:791:see_memory_usage] Before creating fp16 partitions
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,526] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 35.85 GB         Max_CA 36 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:03,527] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 68.86 GB, percent = 9.1%
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:38,153] [INFO] [utils.py:791:see_memory_usage] After creating fp16 partitions: 18
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:38,154] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 54.57 GB         Max_CA 55 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:38,154] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 109.55 GB, percent = 14.5%
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:38,295] [INFO] [utils.py:791:see_memory_usage] Before creating fp32 partitions
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:38,296] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 54.57 GB         Max_CA 55 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:07:38,297] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 109.36 GB, percent = 14.5%
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:08:43,344] [INFO] [utils.py:791:see_memory_usage] After creating fp32 partitions
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:08:43,345] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 54.57 GB         Max_CA 55 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:08:43,345] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 320.56 GB, percent = 42.4%
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:08:49,514] [INFO] [utils.py:791:see_memory_usage] Before initializing optimizer states
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:08:49,515] [INFO] [utils.py:792:see_memory_usage] MA 32.8 GB         Max_MA 32.8 GB         CA 54.57 GB         Max_CA 55 GB
(ActorModelRayActor pid=21287, ip=xxx) [2024-01-24 13:08:49,515] [INFO] [utils.py:799:see_memory_usage] CPU Virtual Memory:  used = 328.24 GB, percent = 43.5%
(raylet, ip=xxx) [2024-01-24 13:10:36,893 E 21071 21071] (raylet) node_manager.cc:3035: 7 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 44341fa36bc962e663a189d9568a9b9dc7495b02fe9d672a341cc3d7, IP: xxx) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip xxx`

I observed CPU memory usage climbing past 600 GB before the OOM, so the memory footprint of the ZeRO-offload optimizer doesn't look right. How much memory is normally required to run this?

wuxibin89 commented 5 months ago

The Adam optimizer has to keep the parameters, the first moment (momentum), and the second moment (variance), all stored in fp32, so it needs 12 bytes per parameter. For a 70B model, CPUAdam therefore requires about 840 GB of CPU memory.
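
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (plain Python, not part of OpenRLHF) of the 12-bytes-per-parameter estimate above:

```python
# Rough CPU-memory estimate for DeepSpeedCPUAdam under ZeRO offload.
# fp32 master params (4 B) + fp32 momentum (4 B) + fp32 variance (4 B) = 12 B per parameter.
def cpu_adam_memory_gb(num_params: float, bytes_per_param: int = 12) -> float:
    return num_params * bytes_per_param / 1e9  # decimal GB

params_70b = 70e9
print(f"70B actor optimizer state:  ~{cpu_adam_memory_gb(params_70b):.0f} GB")      # ~840 GB
print(f"70B actor + 70B critic:     ~{2 * cpu_adam_memory_gb(params_70b):.0f} GB")  # ~1680 GB
```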

LSC527 commented 5 months ago

Thanks for the reply. So with the train_ppo_llama_ray_70b.sh configuration, if the 70B actor and the 70B critic are placed on the same node, that node would need 1680 GB of memory? Would adding more nodes reduce the CPU Adam memory usage per node? (Can the CPU Adam memory be partitioned across nodes via ZeRO?)

wuxibin89 commented 5 months ago

So with the train_ppo_llama_ray_70b.sh configuration, if the 70B actor and the 70B critic are placed on the same node, that node would need 1680 GB of memory?

With the train_ppo_llama_ray_70b.sh configuration, the actor and the critic should be scheduled onto two separate nodes, so each node needs at least 840 GB of memory.

Would adding more nodes reduce the CPU Adam memory usage per node? (Can the CPU Adam memory be partitioned across nodes via ZeRO?)

Yes, ZeRO-3 can partition the optimizer state across multiple nodes.
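
For reference, a minimal sketch of what such a DeepSpeed configuration looks like (illustrative only; the actual config is assembled inside OpenRLHF's training scripts and may differ):

```python
# Illustrative ZeRO-3 config with CPU optimizer offload.
# Stage 3 partitions parameters, gradients, and optimizer state across all ranks,
# and offload_optimizer keeps the partitioned optimizer state in CPU RAM,
# so with N training nodes each node holds roughly 1/N of the ~840 GB state.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": 1,  # placeholder values
    "gradient_accumulation_steps": 1,
}
```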

LSC527 commented 5 months ago

Got it, thanks for the answers!