OpenLLMAI / OpenRLHF

An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & Mixtral)
https://openrlhf.readthedocs.io/
Apache License 2.0

An error occurred during supervised fine-tuning. #338

Open · hehebamei opened this issue 2 days ago

hehebamei commented 2 days ago

```
[rank2]: Traceback (most recent call last):
[rank2]:   File "/home/xxx/project/OpenRLHF/examples/scripts/../train_sft.py", line 180, in <module>
[rank2]:   File "/home/xxx/project/OpenRLHF/examples/scripts/../train_sft.py", line 89, in train
[rank2]:     (model, optim, scheduler) = strategy.prepare((model, optim, scheduler))
[rank2]:   File "/home/xxx/.local/lib/python3.10/site-packages/openrlhf/utils/deepspeed.py", line 157, in prepare
[rank2]:   File "/home/xxx/.local/lib/python3.10/site-packages/openrlhf/utils/deepspeed.py", line 167, in _ds_init_train_model
[rank2]:     engine, optim, _, scheduler = deepspeed.initialize(
[rank2]:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/__init__.py", line 157, in initialize
[rank2]:     config_class = DeepSpeedConfig(config, mpu)
[rank2]:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 782, in __init__
[rank2]:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 961, in _configure_train_batch_size
[rank2]:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 909, in _batch_assertion
[rank2]:     assert train_batch == micro_batch * grad_acc * self.world_size, (
[rank2]: AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 128 != 2 * 9 * 7
```
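
The assertion is DeepSpeed's batch-size consistency check: `train_batch_size` must equal `micro_batch_per_gpu * gradient_accumulation_steps * world_size`. A minimal sketch of that invariant using the numbers from the log above (the helper function is hypothetical, not OpenRLHF or DeepSpeed API):

```python
# Hypothetical helper: reproduces the invariant DeepSpeed asserts
# in _batch_assertion (see the traceback above).
def check_batch_params(train_batch: int, micro_batch: int,
                       grad_acc: int, world_size: int) -> None:
    expected = micro_batch * grad_acc * world_size
    assert train_batch == expected, (
        f"train_batch_size {train_batch} != "
        f"{micro_batch} * {grad_acc} * {world_size} = {expected}"
    )

# The failing configuration from the log: 2 * 9 * 7 = 126, not 128.
# check_batch_params(train_batch=128, micro_batch=2, grad_acc=9, world_size=7)  # raises
# A consistent choice: train_batch_size divisible by micro_batch * world_size.
check_batch_params(train_batch=126, micro_batch=2, grad_acc=9, world_size=7)  # passes
```

DeepSpeed appears to fill in `gradient_accumulation_steps` by integer division, consistent with `128 // (2 * 7) = 9` in the log, so the assertion fires whenever `train_batch_size` is not divisible by `micro_batch_per_gpu * world_size`; a train batch size that is a multiple of `2 * 7 = 14` (e.g. 126 or 140) would satisfy the check.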

hehebamei commented 2 days ago

I used conda to build the environment...
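
A quick way to compare a conda setup against the tested Docker image is to print the installed versions of the relevant packages (a minimal sketch; the package list is an assumption):

```python
# Report installed versions of the packages most likely involved in this
# error, to compare against the versions in the tested Docker image.
import importlib.metadata as md

for pkg in ("deepspeed", "torch", "transformers", "openrlhf"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```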

hijkzzz commented 2 days ago

We only test the code within the Docker container...