hehebamei opened 2 days ago
Traceback (most recent call last):
rank2:   File "/home/xxx/project/OpenRLHF/examples/scripts/../train_sft.py", line 180, in <module>
rank2:   File "/home/xxx/project/OpenRLHF/examples/scripts/../train_sft.py", line 89, in train
rank2:     (model, optim, scheduler) = strategy.prepare((model, optim, scheduler))
rank2:   File "/home/xxx/.local/lib/python3.10/site-packages/openrlhf/utils/deepspeed.py", line 157, in prepare
rank2:   File "/home/xxx/.local/lib/python3.10/site-packages/openrlhf/utils/deepspeed.py", line 167, in _ds_init_train_model
rank2:     engine, optim, _, scheduler = deepspeed.initialize(
rank2:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/__init__.py", line 157, in initialize
rank2:     config_class = DeepSpeedConfig(config, mpu)
rank2:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 782, in __init__
rank2:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 961, in _configure_train_batch_size
rank2:   File "/home/xxx/anaconda3/envs/openrlhf/lib/python3.10/site-packages/deepspeed/runtime/config.py", line 909, in _batch_assertion
rank2:     assert train_batch == micro_batch * grad_acc * self.world_size, (
rank2: AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 128 != 2 * 9 * 7
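The assertion enforces DeepSpeed's batch-size invariant: train_batch_size must equal micro_batch_per_gpu × gradient_accumulation_steps × world_size. Here the configured 128 does not match 2 × 9 × 7 = 126, which suggests the launcher saw 7 ranks (or a gradient-accumulation value of 9) instead of a combination that divides 128 evenly. A minimal sketch of the same check (function name and messages are illustrative, not DeepSpeed's actual internals):

```python
def check_batch_config(train_batch: int, micro_batch: int,
                       grad_acc: int, world_size: int) -> int:
    """Replicate DeepSpeed's batch-size consistency assertion.

    train_batch must equal micro_batch * grad_acc * world_size,
    otherwise configuration is rejected before training starts.
    """
    expected = micro_batch * grad_acc * world_size
    if train_batch != expected:
        raise AssertionError(
            f"train_batch_size is not equal to micro_batch_per_gpu * "
            f"gradient_acc_step * world_size "
            f"{train_batch} != {micro_batch} * {grad_acc} * {world_size}"
        )
    return expected

# Values from the traceback: 2 * 9 * 7 = 126, not 128, so the check fails.
# A consistent setup, e.g. 8 ranks with grad_acc 8: 2 * 8 * 8 = 128, passes.
```

In practice this means checking that the number of GPUs the launcher actually started matches what the batch sizes were computed for, since world_size is derived at runtime rather than from the config file.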
I used conda to build the environment...
We only test the code within the Docker container...